Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: Latest Publications
Interactive GPU-based octree generation and traversal
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159657
Chen Wei, J. Gain, P. Marais
GPU-based ray casting, as introduced by Krüger and Westermann [2003], is an effective method for volumetric rendering. Unfortunately, conventional methods of Empty Space Skipping (ESS) using spatial partitioning, which accelerate ray casting by culling ray-surface intersection tests in empty parts of the volume, do not align well with GPU architectures. A CPU is usually required to generate and parse the tree, and the resulting structure must then be transferred from CPU to GPU. Such CPU-based pre-processing is time-consuming, with the result that spatial tree structures are invariably applied only to static datasets.
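The empty-space-skipping idea itself can be sketched in a few lines. The following is a minimal, CPU-side, two-level stand-in (a coarse max-value brick grid rather than a full octree); the function names, the brick size, and the demo sphere are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def brick_max(volume, b=8):
    """Coarse max grid: one cell per b^3 brick of the volume (a 2-level 'octree')."""
    n = volume.shape[0]
    assert n % b == 0
    return volume.reshape(n // b, b, n // b, b, n // b, b).max(axis=(1, 3, 5))

def first_hit(volume, bricks, b, origin, direction, threshold, step=0.5):
    """Return the first sample along the ray whose density exceeds threshold,
    skipping whole bricks whose max density is below it."""
    n = volume.shape[0]
    d = direction / np.linalg.norm(direction)
    t = 0.0
    while t < n * np.sqrt(3.0):           # longest possible path through the cube
        p = origin + t * d
        if np.any(p < 0) or np.any(p >= n):
            return None                    # ray left the volume
        cell = (p // b).astype(int)
        if bricks[tuple(cell)] < threshold:
            t += b * 0.5                   # empty brick: coarse step (a robust
            continue                       # version would clip to the brick exit)
        if volume[tuple(p.astype(int))] >= threshold:
            return p                       # hit inside an occupied brick
        t += step                          # fine step inside a non-empty brick
    return None

# Tiny demo: a solid sphere of radius 12 in a 64^3 volume.
n = 64
g = np.indices((n, n, n)).transpose(1, 2, 3, 0)
vol = (np.linalg.norm(g - n / 2, axis=-1) < 12).astype(np.float32)
bricks = brick_max(vol, b=8)
hit = first_hit(vol, bricks, 8, np.array([1.0, 32.0, 32.0]),
                np.array([1.0, 0.0, 0.0]), threshold=0.5)
print("first hit at", hit)   # reaches x = 21 while skipping the empty bricks
```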
Citations: 1
Linear compression for spatially-varying BRDFs
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159658
S. Braeger, C. Hughes
The storage requirements for rendering with arbitrary tabular BRDFs can be quite large. This limits the number of BRDFs that can be used in a scene to only a few. Furthermore, material parameters can be too complex to store and render per-pixel.
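"Linear compression" of a spatially-varying BRDF is commonly realized as a low-rank factorization: treat the SVBRDF as a (texels x angular-bins) matrix and keep a truncated basis. Whether the paper uses exactly this factorization is not stated in the abstract; the sketch below uses a plain truncated SVD on synthetic data to show the storage and shading trade-off:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spatially-varying BRDF table: one tabulated BRDF (256 angular bins)
# per texel of a 32x32 material map. Real tables are far larger.
texels, bins = 32 * 32, 256
prototypes = rng.random((8, bins))            # 8 hidden "material prototypes"
weights = rng.random((texels, 8))
svbrdf = weights @ prototypes                 # near-rank-8 data plus noise
svbrdf += 0.01 * rng.random((texels, bins))

# Linear compression: keep k shared basis BRDFs and k weights per texel,
# shrinking storage from texels*bins to texels*k + k*bins values.
k = 8
U, S, Vt = np.linalg.svd(svbrdf, full_matrices=False)
per_texel = U[:, :k] * S[:k]                  # k coefficients stored per texel
basis_brdfs = Vt[:k]                          # k shared basis BRDFs

# At shading time a texel's BRDF is a small dot product: cheap per-pixel.
texel = 123
reconstructed = per_texel[texel] @ basis_brdfs
err = np.abs(reconstructed - svbrdf[texel]).max()
print(f"stored {k}/{bins} coefficients per texel, max error {err:.4f}")
```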
Citations: 0
Editing and constraining kinematic approximations of dynamic motion
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159652
Cyrus Rahgoshay, A. Rabbani, Karan Singh, P. Kry
Physical simulation is now a robust and common approach to recreating reality in virtual worlds and is almost universally used in the animation of natural phenomena, ballistic objects, and character accessories like clothing and hair. Despite these great strides, the animation of primary characters continues to be dominated by the kinematic techniques of motion capture and, above all, traditional keyframing. Two aspects of a primary character in particular, skeletal and facial motion, are often laboriously animated using kinematics. There are perhaps three chief reasons for this. First, kinematics, unencumbered by physics, provides the finest level of control necessary for animators to breathe life and personality into their characters. Second, this control is direct and history-free, in that the authored state of the character, set at any point in time, is precisely observed upon playback, and its impact on the animation is localized to a neighborhood around that time. Third, animator interaction with the timeline is WYSIWYG (what-you-see-is-what-you-get), allowing them to scrub to various points in time and observe the character state without having to play back the entire animation. Secondary dynamics can be overlaid on primarily kinematic character motion to enhance the visceral feel of the characters, but doing so unfortunately compromises the second and third reasons animators rely on pure kinematic control.
Citations: 0
Multiresolution attributes for tessellated meshes
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159645
Henry Schäfer, Magdalena Prus, Quirin Meyer, J. Süßmuth, M. Stamminger
We present a novel representation for storing sub-triangle signals, such as colors, normals, or displacements, directly with the triangle mesh. Signal samples are stored as guided by hardware-tessellation patterns. Thus, we can render directly from our representation by assigning signal samples to attributes of the vertices generated by the hardware tessellator. Contrary to texture mapping, our approach does not require any atlas generation, chartification, or uv-unwrapping. Thus, it does not suffer from texture-related artifacts, such as discontinuities across chart boundaries or distortion. Moreover, our approach allows the optimal sampling rate to be specified adaptively on a per-triangle basis, resulting in significant memory savings for most signal types. We propose a signal-optimal approach for converting arbitrary signals, including existing assets with textures or mesh colors, into our representation. Further, we provide efficient algorithms for mip-mapping and bi- and tri-linear interpolation directly in our representation. Our approach is optimally suited for displacement mapping: it automatically generates crack-free, view-dependent displacement-mapped models, enabling continuous level-of-detail.
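The core mechanics (storing samples on a per-triangle barycentric lattice whose resolution follows a tessellation level, then interpolating within the micro-triangles) can be sketched directly. The layout and interpolation below are a generic reconstruction of that idea, not the paper's exact data structure:

```python
import numpy as np

def sample_lattice(signal, r):
    """Tabulate signal(u, v) at the barycentric lattice points of one
    triangle, mirroring a tessellation pattern of level r."""
    tab = np.zeros((r + 1, r + 1))
    for i in range(r + 1):
        for j in range(r + 1 - i):       # only points with i + j <= r exist
            tab[i, j] = signal(i / r, j / r)
    return tab

def lookup(tab, r, u, v):
    """Interpolate the tabulated signal at barycentric (u, v, 1-u-v)."""
    f, g = u * r, v * r
    i, j = min(int(f), r - 1), min(int(g), r - 1)
    a, b = f - i, g - j
    if a + b <= 1.0:                     # lower micro-triangle
        return (1 - a - b) * tab[i, j] + a * tab[i + 1, j] + b * tab[i, j + 1]
    return (1 - b) * tab[i + 1, j] + (1 - a) * tab[i, j + 1] \
        + (a + b - 1) * tab[i + 1, j + 1]   # upper micro-triangle

# A smooth triangle needs few samples; a detailed one would get a higher r.
# This per-triangle choice of r is the adaptivity the abstract describes.
signal = lambda u, v: np.sin(4 * u) * np.cos(3 * v)
for r in (2, 8, 32):
    tab = sample_lattice(signal, r)
    errs = [abs(lookup(tab, r, u, v) - signal(u, v))
            for u in np.linspace(0, 0.6, 7) for v in np.linspace(0, 0.3, 7)]
    print(f"level {r:2d}: {((r + 1) * (r + 2)) // 2:4d} samples, "
          f"max err {max(errs):.4f}")
```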
Citations: 10
Realtime volume rendering using precomputed photon mapping
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159663
Yubo Zhang, Z. Dong, K. Ma
In this poster, we present a volume rendering framework that achieves real-time rendering of global illumination effects, such as multiple scattering and volumetric shadows, for volume datasets. The approach incorporates the volumetric photon mapping technique [Jensen and Christensen 1998] into the classical precomputed radiance transfer [Sloan et al. 2002] pipeline. Fig. 1 shows that our method is successfully applied in both interactive graphics and scientific visualization applications.
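The abstract leaves the pipeline details to the poster, but the precomputed-radiance-transfer half has a standard linear structure that a few lines can show: the expensive transport solve runs once per light-basis function offline, and relighting at run time is a single matrix-vector product. The sketch below is that generic PRT skeleton on synthetic data, with the photon-mapping pass replaced by a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline: for each of a handful of light-basis functions (e.g. spherical
# harmonics), run the expensive transport solve (the poster's volumetric
# photon mapping) and store the resulting per-voxel radiance as one column.
n_voxels, n_basis = 4096, 9                  # 9 = 3rd-order SH coefficients
transfer = rng.random((n_voxels, n_basis))   # stand-in for the photon pass

# Runtime: any lighting expressed in that basis becomes a single mat-vec,
# which is what makes the precomputed approach real-time.
light_coeffs = rng.random(n_basis)
radiance = transfer @ light_coeffs           # per-voxel outgoing radiance
print(radiance.shape, radiance[:3])
```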
Citations: 1
A reconstruction filter for plausible motion blur
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159639
M. McGuire, P. Hennessy, Michał Bukowski, Brian Osman
This paper describes a novel filter for simulating motion blur phenomena in real time by applying ideas from offline stochastic reconstruction. The filter operates as a 2D post-process on a conventional framebuffer augmented with a screen-space velocity buffer. We demonstrate results on video game scenes rendered and reconstructed in real-time on NVIDIA GeForce 480 and Xbox 360 platforms, and show that the same filter can be applied to cinematic post-processing of offline-rendered images and real photographs. The technique is fast and robust enough that we deployed it in a production game engine used at Vicarious Visions.
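The gather-style reconstruction the paper builds on can be reduced to a minimal form: each output pixel averages samples taken along its screen-space velocity. The sketch below omits the paper's tile-max dilation and depth/velocity weighting, so it is only the skeleton of such a filter, run on synthetic data:

```python
import numpy as np

def motion_blur_gather(color, velocity, n_samples=15):
    """Per-pixel gather along the screen-space velocity. A production
    version also needs neighbor-max velocities and depth-aware weights
    to blur correctly across object boundaries."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            v = velocity[y, x]           # pixels moved during the exposure
            acc = np.zeros(3)
            for i in range(n_samples):   # stratified samples over +/- v/2
                t = (i + 0.5) / n_samples - 0.5
                sx = int(np.clip(round(x + t * v[0]), 0, w - 1))
                sy = int(np.clip(round(y + t * v[1]), 0, h - 1))
                acc += color[sy, sx]
            out[y, x] = acc / n_samples
    return out

# Demo: a bright square moving horizontally by 12 pixels gets streaked.
img = np.zeros((64, 64, 3)); img[24:40, 24:40] = 1.0
vel = np.zeros((64, 64, 2)); vel[..., 0] = 12.0
blurred = motion_blur_gather(img, vel)
print(blurred[32, 20:44, 0].round(2))   # ramps in, plateaus, ramps out
```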
Citations: 33
Ray tracing visualization toolkit
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159628
C. Gribble, J. Fisher, Daniel Eby, E. Quigley, Gideon Ludwig
The Ray Tracing Visualization Toolkit (rtVTK) is a collection of programming and visualization tools supporting visual analysis of ray-based rendering algorithms. rtVTK leverages layered visualization within the spatial domain of computation, enabling investigators to explore the computational elements of any ray-based renderer. Renderers utilize a library for recording and processing ray state, and a configurable pipeline of loosely coupled components allows run-time control of the resulting visualization. rtVTK enhances tasks in development, education, and analysis by enabling users to interact with a visual representation of ray tracing computations.
Citations: 13
A framework for rendering complex scattering effects on hair
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159635
Xuan Yu, Jason C. Yang, J. Hensley, T. Harada, Jingyi Yu
The appearance of hair plays a critical role in synthesizing realistic-looking human characters. However, due to the high complexity of hair geometry and the scattering nature of hair fibers, rendering hair with photorealistic quality at interactive speeds remains an open problem in computer graphics. Previous approaches attempt to simplify the scattering model to tackle only a specific aspect of the scattering effects. In this paper, we present a new approach to simultaneously render complex scattering effects, including volumetric shadows, transparency, and antialiasing, under a unified framework. Our solution uses a shadow-ray path to produce volumetric self-shadows and an additional view-ray path to produce transparency. To compute and accumulate the contribution of individual hair fibers along each (shadow or view) path, we develop a new GPU-based k-buffer technique that can efficiently locate the K nearest scattering locations and combine them in the correct order. Compared with existing multi-layer-based approaches [Kim and Neumann 2001; Yuksel and Keyser 2008; Sintorn and Assarsson 2009], we show that our k-buffer solution can more accurately reproduce the shadowing and transparency effects. Further, we present an anti-aliasing scheme that builds directly upon the k-buffer. We implement all three effects (volumetric shadows, transparency, and anti-aliasing) under a unified rendering pipeline. Experiments on complex hair models demonstrate that our new solution produces near-photorealistic hair rendering at interactive speeds.
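The k-buffer at the heart of the method has a simple contract: per pixel, keep only the K nearest fragments, in depth order, and composite them front-to-back at resolve time. Below is a minimal CPU sketch of that contract; the value of K, the insertion strategy, and the demo fragments are illustrative assumptions:

```python
import numpy as np

K = 8  # fragments kept per pixel

def kbuffer_insert(frags, depth, color, alpha):
    """Insert a fragment into a per-pixel list of the K nearest fragments,
    kept sorted by depth (as a GPU k-buffer would do in registers).
    Fragments beyond the K-th nearest are dropped."""
    frags.append((depth, color, alpha))
    frags.sort(key=lambda f: f[0])
    del frags[K:]

def resolve(frags, background):
    """Composite the depth-sorted fragments front-to-back."""
    c, t = np.zeros(3), 1.0      # accumulated color, remaining transmittance
    for depth, color, alpha in frags:
        c += t * alpha * np.asarray(color)
        t *= 1.0 - alpha
    return c + t * np.asarray(background)

# Demo: hair-like fragments arriving in arbitrary depth order at one pixel.
rng = np.random.default_rng(2)
pixel = []
for _ in range(40):              # 40 overlapping strands, only 8 survive
    kbuffer_insert(pixel, rng.random(), rng.random(3), 0.3)
print(resolve(pixel, background=np.zeros(3)))
```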
Citations: 19
4D parametric motion graphs for interactive animation
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159633
D. Casas, M. Tejera, Jean-Yves Guillemaut, A. Hilton
A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multi-camera studio. The representation is based on a 4D model database of temporally aligned mesh-sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh-sequence blending approach is introduced which combines the realistic deformation of previous non-linear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real time based on surface shape and motion similarity. 4D parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.
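The parametric blending the abstract describes can be illustrated in its simplest linear form: given temporally aligned mesh sequences, a blend weight interpolates vertex positions frame by frame. The paper itself combines non-linear deformation with an efficient online approximation, so the sketch below is only the baseline idea on toy data:

```python
import numpy as np

def blend_sequences(seq_a, seq_b, w):
    """Blend two temporally aligned mesh sequences (frames x vertices x 3)
    frame by frame; w = 0 plays seq_a, w = 1 plays seq_b, and values in
    between give intermediate motions (e.g. a speed between walk and jog)."""
    assert seq_a.shape == seq_b.shape
    return (1.0 - w) * seq_a + w * seq_b

# Toy stand-ins for captured sequences: 30 frames, 100 vertices each.
rng = np.random.default_rng(3)
walk = rng.random((30, 100, 3))
jog = walk + 0.1 * rng.random((30, 100, 3))   # an 'aligned' variation of walk
medium = blend_sequences(walk, jog, 0.5)
print(medium.shape)                            # (30, 100, 3): a new motion
```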
Citations: 37
Animal reality
Pub Date: 2012-03-09 | DOI: 10.1145/2159616.2159651
A. Sherstyuk
Life on Earth has many forms and every life form has its own version of reality, as reflected in the eyes of the viewer. These worlds are as real as the one that we know and all of them are equally fascinating. The multiverse of such "animal realities" can be explored in Virtual Reality, as described in this concept work.
Citations: 0