Fast light-map computation with virtual polygon lights
Christian Luksch, R. Tobler, R. Habel, M. Schwärzler, M. Wimmer
We propose a new method for the fast computation of light maps using a many-light global-illumination solution. A complete scene can be light mapped on the order of seconds to minutes, allowing fast and consistent previews for editing or even generation at loading time. In our method, virtual point lights are clustered into a set of virtual polygon lights, which represent a compact description of the illumination in the scene. The actual light-map generation is performed directly on the GPU. Our approach degrades gracefully, avoiding objectionable artifacts even for very short computation times.
{"title":"Fast light-map computation with virtual polygon lights","authors":"Christian Luksch, R. Tobler, R. Habel, M. Schwärzler, M. Wimmer","doi":"10.1145/2448196.2448210","DOIUrl":"https://doi.org/10.1145/2448196.2448210","url":null,"abstract":"We propose a new method for the fast computation of light maps using a many-light global-illumination solution. A complete scene can be light mapped on the order of seconds to minutes, allowing fast and consistent previews for editing or even generation at loading time. In our method, virtual point lights are clustered into a set of virtual polygon lights, which represent a compact description of the illumination in the scene. The actual light-map generation is performed directly on the GPU. Our approach degrades gracefully, avoiding objectionable artifacts even for very short computation times.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"70 1","pages":"87-94"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82563231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical simulation of an embedded surface mesh involving deformation and fracture
B. Clack, J. Keyser
Physically simulating non-rigid virtual objects that can deform or break apart within their environments is now common in state-of-the-art applications such as video games and surgical simulators. Real-time performance requires a physical model that approximates the true solution closely enough for fast computation while still conveying a believable simulation to the user. By embedding a complex surface mesh within simpler physical geometry, the mesh complexity can be decoupled from the algorithmic complexity of the physical simulation. Embedding methods have been used successfully in production-quality products (e.g., [Parker and O'Brien 2009]). In the presence of fracture, however, it is still unclear how to derive the graphical representation of a solid object defined only as a surface mesh with no volume information.
{"title":"Physical simulation of an embedded surface mesh involving deformation and fracture","authors":"B. Clack, J. Keyser","doi":"10.1145/2448196.2448237","DOIUrl":"https://doi.org/10.1145/2448196.2448237","url":null,"abstract":"Physically simulating non-rigid virtual objects which can deform or break apart within their environments is now common in state-of-the-art virtual simulations such as video games or surgery simulations. Real-time performance requires a physical model which provides an approximation to the true solution for fast computations but at the same time conveys enough believability of the simulation to the user. By embedding a complex surface mesh within simpler physical geometry, the mesh complexity can be separated from the algorithmic complexity of the physical simulation. Embedding methods have been successful in production quality products (e.g. [Parker and O'Brien 2009]). In the presence of fracture it is still unclear how to derive the graphical representation of a solid object defined only as a surface mesh with no volume information.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"69 1","pages":"189"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83608242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Texture brush: an interactive surface texturing interface
Qian Sun, Long Zhang, Minqi Zhang, Xiang Ying, Shiqing Xin, Jiazhi Xia, Ying He
This paper presents Texture Brush, an interactive interface for texturing 3D surfaces. We extend the conventional exponential map to a more general setting in which the generator can be an arbitrary curve. Based on our extended exponential map, we develop a local parameterization method that naturally supports anisotropic texture mapping. With Texture Brush, the user can easily specify such a local parameterization with a free-form stroke on the surface. We also propose a set of intuitive operations based on a 3D painting metaphor, including texture painting, texture cloning, texture animation design, and texture editing. Compared to existing surface texturing techniques, our method enables a smoother and more natural workflow, so that the user can focus on the design task itself without switching back and forth among different tools or stages. The encouraging experimental results and positive evaluation by artists demonstrate the efficacy of Texture Brush for interactive texture mapping.
{"title":"Texture brush: an interactive surface texturing interface","authors":"Qian Sun, Long Zhang, Minqi Zhang, Xiang Ying, Shiqing Xin, Jiazhi Xia, Ying He","doi":"10.1145/2448196.2448221","DOIUrl":"https://doi.org/10.1145/2448196.2448221","url":null,"abstract":"This paper presents Texture Brush, an interactive interface for texturing 3D surfaces. We extend the conventional exponential map to a more general setting, in which the generator can be an arbitrary curve. Based on our extended exponential map, we develop a local parameterization method which naturally supports anisotropic texture mapping. With Texture Brush, the user can easily specify such local parameterization with a free-form stroke on the surface. We also propose a set of intuitive operations which are mainly based on 3D painting metaphor, including texture painting, texture cloning, texture animation design, and texture editing. Compared to the existing surface texturing techniques, our method enables a smoother and more natural work flow so that the user can focus on the design task itself without switching back and forth among different tools or stages. The encouraging experimental results and positive evaluation by artists demonstrate the efficacy of our Texture Brush for interactive texture mapping.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"1 1","pages":"153-160"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87065030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast percentage closer soft shadows using temporal coherence
M. Schwärzler, Christian Luksch, D. Scherzer, M. Wimmer
We propose a novel way to efficiently calculate soft shadows in real-time applications by overcoming the high computational effort of the complex visibility estimation otherwise required every frame: we exploit the temporal coherence prevalent in typical scene movement, so that a new shadow value needs to be estimated only where regions are newly disoccluded by camera movement or where the shadow situation changes due to object movement. By extending the typical shadow-mapping algorithm with an additional lightweight buffer that tracks dynamic scene objects, we can robustly and efficiently detect all screen-space fragments that need to be updated, including not only the moving objects themselves but also the soft shadows they cast. By applying this strategy to the popular Percentage Closer Soft Shadows (PCSS) algorithm, we double rendering performance in scenes with both static and dynamic objects -- as prevalent in typical 3D game levels -- while maintaining the visual quality of the original approach.
{"title":"Fast percentage closer soft shadows using temporal coherence","authors":"M. Schwärzler, Christian Luksch, D. Scherzer, M. Wimmer","doi":"10.1145/2448196.2448209","DOIUrl":"https://doi.org/10.1145/2448196.2448209","url":null,"abstract":"We propose a novel way to efficiently calculate soft shadows in real-time applications by overcoming the high computational effort involved with the complex corresponding visibility estimation each frame: We exploit the temporal coherence prevalent in typical scene movement, making the estimation of a new shadow value only necessary whenever regions are newly disoccluded due to camera adjustment, or the shadow situation changes due to object movement. By extending the typical shadow mapping algorithm by an additional light-weight buffer for the tracking of dynamic scene objects, we can robustly and efficiently detect all screen space fragments that need to be updated, including not only the moving objects themselves, but also the soft shadows they cast. By applying this strategy to the popular Percentage Closer Soft Shadow algorithm (PCSS), we double rendering performance in scenes with both static and dynamic objects -- as prevalent in various 3D game levels -- while maintaining the visual quality of the original approach.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"44 1","pages":"79-86"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89893662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simple and efficient example-based texture synthesis using tiling and deformation
Kan Chen, H. Johan, W. Müller-Wittig
In computer graphics, textures represent the detailed appearance of object surfaces, such as colors and patterns. Example-based texture synthesis constructs a larger visual pattern from a small example texture image. In this paper, we present a simple and efficient method that synthesizes a large-scale texture in real time from a given example texture by simply tiling and deforming it. Unlike most existing techniques, our method performs no search operation and can compute texture values at any given point (random access). In addition, it requires little storage, since only the single example texture needs to be stored. Our method is suitable for synthesizing irregular and near-stochastic textures. We also propose methods to efficiently synthesize and map 3D solid textures onto 3D meshes.
{"title":"Simple and efficient example-based texture synthesis using tiling and deformation","authors":"Kan Chen, H. Johan, W. Müller-Wittig","doi":"10.1145/2448196.2448219","DOIUrl":"https://doi.org/10.1145/2448196.2448219","url":null,"abstract":"In computer graphics, textures represent the detail appearance of the surface of objects, such as colors and patterns. Example-based texture synthesis is to construct a larger visual pattern from a small example texture image. In this paper, we present a simple and efficient method which can synthesize a large scale texture in real-time based on a given example texture by simply tiling and deforming the example texture. Different from most of the existing techniques, our method does not perform search operation and it can compute texture values at any given points (random access). In addition, our method requires small storage which is only to store one example texture. Our method is suitable for synthesizing irregular and near-stochastic texture. We also propose methods to efficiently synthesize and map 3D solid textures on 3D meshes.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"14 1","pages":"145-152"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75811766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Splatting lines for 3D mesh illustration
Qian Sun, Long Zhang, Ying He
Line drawings are a popular shape-depiction technique due to their capability to express meaningful information while ignoring less important or distracting details. Many computer-generated line-drawing algorithms have been proposed in the past decade, such as suggestive contours, ridge-valley lines, apparent ridges, photic extremum lines, demarcating curves, and Laplacian lines, to name just a few.
{"title":"Splatting lines for 3D mesh illustration","authors":"Qian Sun, Long Zhang, Ying He","doi":"10.1145/2448196.2448241","DOIUrl":"https://doi.org/10.1145/2448196.2448241","url":null,"abstract":"Line drawings are a popular shape depiction technique due to its capability to express meaningful information by ignoring less important or distracting details. Many computer generated line drawing algorithms have been proposed in the past decade, such as suggestive contours, ridge-valley lines, apparent ridges, photic extremum lines, demarcating curves, Laplacian lines, just name a few.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"10 1","pages":"193"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74310557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Warping virtual space for low-cost haptic feedback
Luv Kohli
With the introduction of the Nintendo Wii, PlayStation Move, and Microsoft Kinect, gaming and virtual-reality technologies have begun to merge. These technologies have enabled low-cost, more natural interaction with games and virtual environments (VEs). However, the sense of touch is usually missing. Interacting with virtual objects often means holding a hand in the air, which can be tiring over long periods. I present a technique to turn simple physical objects into haptic surfaces for virtual objects, extending earlier work on Redirected Touching [Kohli et al. 2012].
{"title":"Warping virtual space for low-cost haptic feedback","authors":"Luv Kohli","doi":"10.1145/2448196.2448243","DOIUrl":"https://doi.org/10.1145/2448196.2448243","url":null,"abstract":"With the introduction of the Nintendo Wii, Playstation Move, and Microsoft Kinect, gaming and virtual reality technologies have begun to merge. These technologies have enabled low-cost, more-natural interaction with games and virtual environments (VEs). However, the sense of touch is usually missing. Interacting with virtual objects often means holding a hand in the air, which can be tiring if done for long. I present a technique to turn simple objects into haptic surfaces for virtual objects, extending earlier work on Redirected Touching [Kohli et al. 2012].","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"21 1","pages":"195"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79369289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volume-based indirect lighting with irradiance decomposition
Ruirui Li, K. Qin
High-quality indirect lighting at interactive speeds is a difficult challenge. Volume-based rendering techniques have been used to approximate indirect illumination quickly. The Light Propagation Volume (LPV) method [Kaplanyan et al. 2010] partitions the scene into a coarse lattice and propagates radiance, represented with spherical harmonics, across it. LPV can render complex, dynamic scenes in real time, but since radiance transfer between lattice cells is highly approximated, it fails to simulate indirect lighting between surfaces within the same cell. Voxel-based Global Illumination (VGI) [Kaplanyan et al. 2011], on the other hand, voxelizes the scene into fine voxels and performs voxel-based ray marching to find the reflecting surface in the near field. With 1/4 x 1/4 subsampling, VGI renders at 18-28 frames per second, but for complex scenes and full global illumination it requires huge volume data and a large number of rays, which degrades rendering performance to 2.2 s per frame.
{"title":"Volume-based indirect lighting with irradiance decomposition","authors":"Ruirui Li, K. Qin","doi":"10.1145/2448196.2448242","DOIUrl":"https://doi.org/10.1145/2448196.2448242","url":null,"abstract":"High-quality indirect lighting at interactive speed is a difficult challenge. To fast approximate the indirect illumination, the volume-based rendering techniques were used. The Light Propagation Volume (LPV) [Kaplanyan et al. 2010] method departs scenes into coarse lattices and propagates spherical harmonics represented radiance on them. The LPV is able to render complex and dynamic scenes in real-time. But since the radiance transfer is highly approximated among the lattices, it fails to simulate the indirect lighting between the surfaces in the same lattice. On the other hand, the Voxel-based Global Illumination (VGI) [Kaplanyan et al. 2011] voxelizes the scene into fine voxels. It performs the voxel-based ray marching to find the reflected surface in the near-field. By adopting a 1/4x1/4 sampling, the VGI can render in a speed of 18~28 frame per second. But for complex scenes and real global illumination, it requires huge volume data and a large number of rays which degrades the rendering performance to a speed of 2.2s per frame.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"155 1","pages":"194"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87856971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WYSIWYG stereo painting
Yongjin Kim, H. Winnemöller, Seungyong Lee
Despite the increasing popularity of stereo capture and display systems, creating stereo artwork remains a challenge. This paper presents a stereo painting system that enables effective from-scratch creation of high-quality stereo artwork. A key concept of our system is the stereo layer, which is composed of two RGBAd (RGBA + depth) buffers. Stereo layers alleviate the need for the fully formed representational 3D geometry required by most existing 3D painting systems, and they allow for simple, essential depth specification. RGBAd buffers also provide scalability for complex scenes by minimizing the dependency of stereo-painting updates on scene complexity. For interaction with stereo layers, we present stereo paint and stereo depth brushes, which manipulate the photometric (RGBA) and depth buffers of a stereo layer, respectively. In our system, painting and depth-manipulation operations can be performed in arbitrary order with real-time visual feedback, providing a flexible WYSIWYG workflow for stereo painting. Comments from artists and experimental results demonstrate that our system effectively aids in the creation of compelling stereo paintings.
{"title":"WYSIWYG stereo painting","authors":"Yongjin Kim, H. Winnemöller, Seungyong Lee","doi":"10.1145/2448196.2448223","DOIUrl":"https://doi.org/10.1145/2448196.2448223","url":null,"abstract":"Despite increasing popularity of stereo capture and display systems, creating stereo artwork remains a challenge. This paper presents a stereo painting system, which enables effective from-scratch creation of high-quality stereo artwork. A key concept of our system is a stereo layer, which is composed of two RGBAd (RGBA + depth) buffers. Stereo layers alleviate the need for fully formed representational 3D geometry required by most existing 3D painting systems, and allow for simple, essential depth specification. RGBAd buffers also provide scalability for complex scenes by minimizing the dependency of stereo painting updates on the scene complexity. For interaction with stereo layers, we present stereo paint and stereo depth brushes, which manipulate the photometric (RGBA) and depth buffers of a stereo layer, respectively. In our system, painting and depth manipulation operations can be performed in arbitrary order with real-time visual feedback, providing a flexible WYSIWYG workflow for stereo painting. Comments from artists and experimental results demonstrate that our system effectively aides in the creation of compelling stereo paintings.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"10 1","pages":"169-176"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84093708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From 3D to reality: projector-based sculpture assistance system
Fu-Che Wu
Art is a creative process. Can it be supported by a computational tool? Flagg et al. [2006] proposed that capture-and-access technology can provide a key form of computational support for the creative process. Following this idea, a projector-based sculpture guiding system was constructed. The system can scan 3D structures, compare the differences, and display the information on the physical surface. The system consists of a projector and a camera. The camera is a Point Grey Chameleon USB camera with an image resolution of 1280x960; the projector is an NEC M300x with a resolution of 1024x768. To keep a fixed relationship between the projector and the camera, the camera is mounted on the projector.
{"title":"From 3D to reality: projector-based sculpture assistance system","authors":"Fu-Che Wu","doi":"10.1145/2448196.2448234","DOIUrl":"https://doi.org/10.1145/2448196.2448234","url":null,"abstract":"Art is a creative process. Can it possibly be supported by a computational tool? Flagg et al. [Flagg et al. 2006] proposed that capture and access technology can provide a key form of computational support for the creative process. Following the idea, a projector-based sculpture guiding system was constructed. The system can scan 3D structures, compare the difference and display the information on the physical surface. The system consists of a projector and a camera. The camera is the Point Grey Chameleon USB camera. The resolution of the image is 1280x960. The projector is the NEC M300x which a resolution of 1024x768. To keep a fixed relationship between the projector and the camera, the camera is mounted on the projector.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"75 1","pages":"186"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86327157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}