Title: Tabletop Ensemble: touch-enabled virtual percussion instruments
Authors: Zhimin Ren, Ravish Mehra, Jason Coposky, M. Lin
DOI: https://doi.org/10.1145/2159616.2159618
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, pp. 7-14

Abstract: We present Tabletop Ensemble, an interactive virtual percussion instrument system that a group of users can play collaboratively and simultaneously, emulating real-world music making while retaining the flexibility of virtual simulation. An optical multi-touch tabletop serves as the input device. A novel touch-handling algorithm for such devices translates users' interactions into percussive control signals appropriate for music playing. These signals drive the proposed sound simulation system to generate realistic, user-controlled musical sounds. A fast physically based sound synthesis technique, modal synthesis, lets users directly produce rich, varying musical tones, as they would with real percussion instruments. In addition, we propose a simple coupling scheme in which an accurate numerical acoustic simulator modulates the synthesized sounds to create believable acoustic effects due to cavities in musical instruments. This paradigm allows new virtual percussion instruments of various materials, shapes, and sizes to be created with little overhead. We believe such an interactive, multi-modal system offers capabilities for expressive music playing, rapid prototyping of virtual instruments, and active exploration of sound effects governed by physical parameters in classrooms, museums, and other educational settings. The presented system demonstrates virtual xylophones and drums with various physical properties.
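Modal synthesis, the technique named in the abstract, represents a struck object's sound as a bank of exponentially damped sinusoids, one per vibration mode. A minimal sketch, assuming illustrative mode frequencies, dampings, and gains (placeholder values, not ones from the paper):

```python
import math

def modal_synthesis(modes, impact_gain, duration, sample_rate=44100):
    """Sum of exponentially damped sinusoids: one (freq_hz, damping, amp) per mode."""
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = 0.0
        for freq_hz, damping, amp in modes:
            # Each mode rings at its own frequency and decays at its own rate.
            s += amp * math.exp(-damping * t) * math.sin(2 * math.pi * freq_hz * t)
        samples.append(impact_gain * s)
    return samples

# Hypothetical "wooden bar" modes: a fundamental plus two overtones.
bar_modes = [(440.0, 8.0, 1.0), (1170.0, 12.0, 0.5), (2280.0, 20.0, 0.25)]
tone = modal_synthesis(bar_modes, impact_gain=0.8, duration=0.5)
```

Impact strength maps naturally to `impact_gain`, which is how a touch signal from the tabletop could scale loudness; changing the mode table is what makes new materials, shapes, and sizes cheap to prototype.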
Title: Efficient pixel-accurate rendering of curved surfaces
Authors: Young In Yeo, Lihan Bin, J. Peters
DOI: https://doi.org/10.1145/2159616.2159644
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, pp. 165-174

Abstract: A curved or higher-order surface, such as a spline patch or a Bézier patch, is rendered pixel-accurate if it displays neither polyhedral artifacts nor parametric distortion. This paper shows how to set the evaluation density for a patch just finely enough that parametric surfaces render pixel-accurate in the standard graphics pipeline. The approach uses tight estimates, not of the size under screen projection, but of the variance under screen projection between the exact surface and its triangulation. An implementation using the GPU tessellation engine runs at interactive rates comparable to standard rendering.
Title: Real-time bidirectional path tracing via rasterization
Authors: Yusuke Tokuyoshi, Shinji Ogaki
DOI: https://doi.org/10.1145/2159616.2159647
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, pp. 183-190

Abstract: Global illumination drastically improves the visual realism of interactive applications. Although many interactive techniques are available, they have limitations or employ coarse approximations. For example, general instant radiosity often suffers from numerical error because its sampling strategy fails in some cases. This problem can be reduced with a bidirectional sampling strategy of the kind often used in off-line rendering, but such strategies have been complicated to implement in real-time applications. This paper presents a simple real-time global illumination system based on bidirectional path tracing. The proposed system approximates bidirectional path tracing using rasterization on a commodity DirectX® 11-capable GPU. Moreover, a simple and efficient artifact-suppression technique for glossy surfaces is also introduced.
Title: Hardware accelerated construction of SAH-based bounding volume hierarchies for interactive ray tracing
Authors: Michael J. Doyle, Colin Fowler, M. Manzke
DOI: https://doi.org/10.1145/2159616.2159655
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, p. 209

Abstract: The ray tracing algorithm is known primarily for producing highly realistic images, but also for its high computational cost. Perhaps the most effective method for accelerating ray tracing is the use of spatial index structures, such as kd-trees and bounding volume hierarchies. In highly dynamic scenes, these structures must be rebuilt frequently, which constitutes a considerable portion of the total time to image.
Title: Decoupled deferred shading for hardware rasterization
Authors: Gabor Liktor, C. Dachsbacher
DOI: https://doi.org/10.1145/2159616.2159640
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, pp. 143-150

Abstract: In this paper we present decoupled deferred shading: a rendering technique based on a new data structure, the compact geometry buffer, which stores shading samples independently of visibility. This enables caching and efficient reuse of shading computation, e.g. for stochastic rasterization techniques. In contrast to previous methods, our decoupled shading can be implemented efficiently on current graphics hardware. We describe two variants that differ in how the shading samples are cached: the first maintains a single cache for the entire image in global memory, while the second pursues a tile-based approach leveraging the local memory of the GPU's multiprocessors. We demonstrate that decoupled deferred shading speeds up rendering in applications with stochastic supersampling, depth of field, and motion blur.
Title: Texture compression using wavelet decomposition: a preview
Authors: P. Mavridis, Georgios Papaioannou
DOI: https://doi.org/10.1145/2159616.2159664
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, p. 218

Abstract: We present a new fixed-rate texture compression scheme based on the energy-compaction properties of the Discrete Wavelet Transform. Targeting existing commodity graphics hardware and APIs, our method uses the DXT compression formats to quantize and store the wavelet transform coefficients, ensuring very fast decoding speeds. An optimization framework minimizes the quantization error of the coefficients and improves overall compression quality. Our method provides a variety of low-bitrate encoding modes for the compression of grayscale and color textures; these modes offer either improved quality or reduced storage relative to the DXT1 format. Furthermore, anisotropic texture filtering is performed efficiently with the help of the native texture hardware. The decoding speed and the simplicity of the implementation make our approach well suited to games and other interactive applications.
Title: Dynamic eye convergence for head-mounted displays improves user performance in virtual environments
Authors: A. Sherstyuk, Arindam Dey, C. Sandor, A. State
DOI: https://doi.org/10.1145/2159616.2159620
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, pp. 23-30

Abstract: In virtual environments (VEs), users often face tasks that involve direct manipulation of virtual objects at close distances, such as touching, grabbing, and placement. In immersive systems that employ head-mounted displays, these tasks can be quite challenging due to the lack of convergence of the virtual cameras.

We present a mechanism that dynamically converges the left and right cameras on target objects in a VE, automatically simulating the natural process that takes place in real life. As a result, the rendering system maintains optimal conditions for stereoscopic viewing of target objects at varying depths, in real time.

Building on our previous work, which introduced the eye convergence algorithm [Sherstyuk and State 2010], we developed a virtual reality (VR) system and conducted an experimental study of the effects of eye convergence in an immersive VE. This paper gives a full description of the system and the study design, together with a detailed analysis of the results obtained.
Title: Real-time rough refraction via LEAN mapping and Gaussian sum reduction
Authors: Zeng Dai, Chris Wyman
DOI: https://doi.org/10.1145/2159616.2159662
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, p. 216

Abstract: Rough refraction commonly occurs when light scatters on rough transparent surfaces. It presents a computational challenge, as every pixel's color depends on incoming light from numerous directions. De Rousiers et al. [2011] compute rough refraction interactively using a convolution of Gaussian normal and transmittance distribution functions (NDFs and BTDFs), but their work is limited to constant-roughness surfaces. We introduce two methods that allow for varying roughness by representing surface normals using LEAN mapping and Gaussian sum reduction (GSR).
Title: Delta radiance transfer
Authors: B. Loos, D. Nowrouzezahrai, Wojciech Jarosz, Peter-Pike J. Sloan
DOI: https://doi.org/10.1145/2159616.2159648
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, pp. 191-196

Abstract: Modular Radiance Transfer (MRT) is a recent technique for computing approximate direct-to-indirect transport. Scenes are dynamically constructed by warping and connecting simple shapes, and compact transport operators are precomputed only on these simple shapes. MRT ignores fine-scale transport from "clutter" objects inside the scene and computes light transport with reduced-dimensional operators, which allows extremely high performance but can lead to significant approximation error. We present several techniques to alleviate this limitation, allowing the light transport from clutter in a scene to be accounted for. We derive additional low-rank delta operators that compensate for the missing light transport paths by modeling indirect shadows and interreflections from, and onto, clutter objects in the scene. We retain MRT's scene-independent precomputation and augment its scene-dependent initialization with clutter transport generation, yielding increased accuracy without a performance penalty. Our implementation is simple, requiring a few small matrix-vector multiplications that generate a delta lightmap added to MRT's output, and it does not diminish the performance benefits of the overall algorithm.
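The runtime pattern of "a few small matrix-vector multiplications that generate a delta lightmap" can be sketched in a few lines. Everything below is schematic: the matrix sizes and values are placeholders, and in the real system the delta operator comes from MRT's precomputation, not from hand-written constants:

```python
def matvec(m, v):
    """Dense matrix-vector product: each row of m dotted with v."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def shade_with_clutter(mrt_lightmap, delta_op, direct_light):
    """Apply a low-rank clutter correction: delta_op maps the (small)
    direct-lighting vector to a delta lightmap that is summed onto
    MRT's existing output."""
    delta = matvec(delta_op, direct_light)
    return [base + d for base, d in zip(mrt_lightmap, delta)]

# Toy 3-texel lightmap, 2-entry direct-light vector, 3x2 delta operator.
base = [0.5, 0.5, 0.5]
delta_op = [[0.1, 0.0], [0.0, 0.1], [0.05, 0.05]]
lit = shade_with_clutter(base, delta_op, [1.0, 2.0])
```

Because the operator is low rank and small, the correction adds only a handful of multiply-adds per frame, which is why the abstract can claim accuracy gains without a performance penalty.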
Title: Lossless compression of variable-precision floating-point buffers on GPUs
Authors: Jeff Pool, A. Lastra, Montek Singh
DOI: https://doi.org/10.1145/2159616.2159624
Published: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 9, 2012, pp. 47-54

Abstract: In this work, we explore the lossless compression of 32-bit floating-point buffers on graphics hardware. We first adapt a state-of-the-art 16-bit floating-point color and depth buffer compression scheme to operate on 32-bit data and propose two specific enhancements: dynamic bucket selection and a Fibonacci encoder. Next, we describe a unified codec for any type of floating-point buffer: color, depth, geometry, and GPGPU data. We also propose a method to further compress variable-precision data. Finally, we test our techniques on color, depth, and geometry buffers from existing applications. Using our enhancements to an existing technique, we improved bandwidth savings by an average of 1.26x. Our unified codec achieved average bandwidth savings of 1.5x, 7.9x, and 2.9x for color (including buffers incompressible by past work), depth, and geometry buffers, respectively. Even higher savings were achieved in combination with our variable-precision technique, though the specific ratios depend on how much precision loss the application tolerates.