Arbitrarily layered micro-facet surfaces
A. Weidlich, A. Wilkie (2007). doi:10.1145/1321261.1321292

In this paper we present a method to combine several micro-facet based surface layers into a single unified, expressive BRDF model that is easy to use. The restriction to micro-facet based layers constitutes no loss of generality, since both perfectly specular and perfectly diffuse surfaces can be seen as limit cases of the micro-facet approach. Such multi-layered surfaces can be used to re-create the appearance of a wide range of different materials, and yield good results without having to perform explicit sub-surface scattering computations. This is achieved through suitable approximations and simplifications of the scattering within the simulated layered surface, while still taking absorption and total internal reflection into account. We also discuss the corresponding probability distribution function that is needed for sampling purposes, and investigate how the flexibility of this new approach is best put to use.
Evaluation of real-time physics simulation systems
A. Boeing, T. Bräunl (2007). doi:10.1145/1321261.1321312

We present a qualitative evaluation of a number of freely available physics engines for simulation systems and game development. A brief overview of the aspects of a physics engine is presented, accompanied by a comparison of the capabilities of each engine. The aspects investigated include the accuracy and computational efficiency of the integrator, material properties, stacks, links, and the collision detection system.
Visibility map for global illumination in point clouds
Rhushabh Goradia, Anil Kanakanti, S. Chandran, A. Datta (2007). doi:10.1145/1321261.1321269

Point-sampled geometry has gained significant interest due to its simplicity. The lack of connectivity, touted as a plus, however, creates difficulties in many operations, such as generating global illumination effects. This becomes especially true when we have a complex scene consisting of several models. The data is often hard to segment into individual models and hence is not suitable for surface reconstruction. Inter-reflections in such complex scenes require knowledge of visibility between point pairs. Computing visibility for point models is more difficult than for polygonal models, since we do not have any surface or object information. We present in this paper a novel, hierarchical, fast and memory-efficient algorithm to compute a description of mutual visibility in the form of a visibility map. Ray shooting and visibility queries can be answered in sub-linear time using this data structure. We evaluate our scheme analytically, qualitatively, and quantitatively and conclude that these maps are desirable.
The Aristotelian rainbow: from philosophy to computer graphics
J. Frisvad, Niels Jørgen Christensen, P. Falster (2007). doi:10.1145/1321261.1321282

Developments in the graphics discipline called realistic image synthesis are in many ways related to the historical development of theories of light. And theories of light will probably continue to inspire the ongoing search for realism in graphics. To nurture this inspiration, we present the first in-depth, source-based historical study that pinpoints events with relevance for graphics in the development of theories of light. We also show that ancient mathematical models for light scattering phenomena may still find a use in the branch of realistic image synthesis concerned with real-time rendering. As an example we use Aristotle's theory of rainbow formation to construct a method for real-time rendering of rainbows. This example serves as an invitation to use the overview and references provided in this paper, not only for understanding where many of the physical concepts used in graphics come from, but also for finding more mathematical and physical models that are useful in graphics.
Mode-splitting for highly detailed, interactive liquid simulation
H. Cords (2007). doi:10.1145/1321261.1321309

This work introduces a new technique for highly detailed, interactive liquid simulations. Similar to the mode-splitting method used, e.g., in oceanography, we separate the simulation of the low-frequency liquid flow from that of the high-frequency free-surface waves. Hence, the performance of highly detailed liquid simulations can be increased immensely. We use the 2D wave equation for the surface simulation and the 3D Navier-Stokes equations to describe the liquid flow. Thus, both the surface and the liquid flow are simulated physically based, resulting in highly detailed and fully interactive liquid simulations: the liquid flows according to gravity, ground, obstacles and interactions, and the surface reacts to impacts and moving obstacles and propagates highly detailed surface waves. Our method obtains realistic results at high frame rates and is therefore well suited to today's video games, VR environments and medical simulators.
Algorithms for spherical harmonic lighting
I. Lisle, Tracy Shih-lung Huang (2007). doi:10.1145/1321261.1321303

Spherical harmonic (SH) lighting models require efficient and general libraries for evaluation of SH functions and of Wigner matrices for rotation. We introduce an efficient algebraic recurrence for evaluation of SH functions, and also implement SH rotation via Wigner matrices constructed for the real SH basis by a recurrence. Using these algorithms, we provide a freely distributable C/OpenGL implementation for SH diffuse unshadowed, shadowed and inter-reflected models. Our implementation allows flexible switching of scene, light probe, SH degree and lighting model at run time.
Data-intensive image based relighting
Biswarup Choudhury, S. Chandran (2007). doi:10.1145/1321261.1321289

Image-based relighting (IBRL) has attracted a lot of interest in the computer graphics research, gaming, and virtual cinematography communities for its ability to relight objects or scenes from novel illuminations captured in natural or synthetic environments. However, the advantages of an image-based framework conflict with a drastic increase in storage caused by the huge number of reference images pre-captured under various illumination conditions. To perform fast relighting while maintaining visual fidelity, one needs to preprocess this huge dataset into an appropriate model. In this paper, we propose a novel and efficient two-stage relighting algorithm which creates a compact representation of the huge IBRL dataset and facilitates fast relighting. In the first stage, a set of eigen image bases and relighting coefficients is computed using Singular Value Decomposition. In the second stage, and in contrast to prior methods, the correlation among the relighting coefficients is harnessed using Spherical Harmonics. The proposed method thus has lower memory and computational requirements. We demonstrate our results qualitatively and quantitatively on newly generated image data.
Curvature-based shading of translucent materials, such as human skin
K. Kolchin (2007). doi:10.1145/1321261.1321304

The paper introduces a new approximate method for rendering translucent materials. We represent the surface around a point to be rendered in Monge's form using principal curvatures. The subsurface reflectance equation in the dipole diffusion approximation [Jensen et al. 2001] is then integrated over the surface. The outgoing radiance at the point is expressed as a function of principal curvatures and light vector components along the principal directions. This function can be precomputed as a 2D lookup table, which can be stored as a texture image. The paper presents preliminary results of our work on implementation of the model described.
Panorama maps with non-linear ray tracing
M. Falk, T. Schafhitzel, D. Weiskopf, T. Ertl (2007). doi:10.1145/1321261.1321263

We present a framework for the interactive generation of 3D panorama maps. Our approach addresses the main issue that occurs during panorama map construction: non-linear projection or deformation of the terrain in order to minimize the occlusion of important information such as roads and trails. Traditionally, panorama maps are hand-drawn by skilled illustrators. In contrast, our approach provides computer support for the rendering of non-occluded views of 3D panorama maps, where deformations are modeled by non-linear ray tracing. The deflection of rays is influenced by 2D and 3D force fields that directly consider the shape of the terrain. In addition, our framework allows the user to further modify the force fields to have fine control over the deformations of the panorama map. User interaction is facilitated by our real-time rendering system in terms of linked multiple views of both linear and non-linear projected terrain and the deformed view rays. Fast rendering is achieved by GPU-based non-linear ray tracing. We demonstrate the usefulness of our modeling and visualization method by several examples.
Appearance preserving octree-textures
J. Lacoste, T. Boubekeur, B. Jobard, C. Schlick (2007). doi:10.1145/1321261.1321277

Because of their geometric complexity, high resolution 3D models, either designed in high-end modeling packages or acquired with range scanning devices, cannot be directly used in applications that require rendering at interactive framerates. One clever method to overcome this limitation is to perform an appearance preserving geometry simplification, by replacing the original model with a low resolution mesh equipped with high resolution normal maps. This process visually preserves small scale features from the initial geometry, while only requiring a reduced set of polygons. However, this conversion usually relies on some kind of global or piecewise parameterization, combined with the generation of a texture atlas, a process that is computationally expensive and requires precise user supervision. In this paper, we propose an alternative method in which the normal field of a high resolution model is adaptively sampled and encoded in an octree-based data structure, that we call appearance preserving octree-texture (APO). Our main contributions are: a normal-driven octree generation, a compact encoding and an efficient look-up algorithm. Our method is efficient, totally automatic, and avoids the expensive creation of a parameterization with its corresponding texture atlas.