Processing and rendering of point sampled geometry
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962875
M. Gross
Within the history of computer graphics, a plenitude of sophisticated surface representations and graphics primitives have been devised, including splines, implicit surfaces, and hierarchical approaches. All of these methods aim at facilitating the creation, processing and display of graphics models with increasingly complex shape or surface detail. In spite of the sophistication of these methods, the triangle has survived for decades as the major graphics primitive, striking the right balance between descriptive power and computational effort. As a consequence, today's consumer graphics hardware is mostly tailored to high-performance triangle processing. In addition, an emerging repertoire of powerful geometric processing methods seems to favor the concept of triangle meshes for graphics modeling. In recent years, the emergence of affordable 3D scanning devices, along with the demand for ever more geometric detail and rich organic shapes, has created the need to process and render very large point-sampled models efficiently. At data sizes where triangle-based methods approach their limits, point representations are receiving growing attention. Unlike triangles, points have largely been neglected as a graphics primitive. Although included in many APIs, it is only recently that point samples have experienced a renaissance in computer graphics. Conceptually, points provide a discretization of geometry without explicit storage of topology. Thus, point samples reduce the representation to the essentials needed for rendering and enable us to generate highly optimized object representations. Although the loss of topology poses great challenges for graphics processing, the latest generation of algorithms features high-performance rendering, point/pixel shading, anisotropic texture mapping, and advanced signal processing of point-sampled geometry. In this talk, I will introduce point samples as a versatile graphics primitive and present concepts for the acquisition, processing and rendering of large point sets. The first part of the talk discusses low-cost scanning devices and algorithms used to reconstruct 3D point clouds from video image sequences. Powerful PC clusters allow for real-time computation of the underlying image processing algorithms. Such concepts have been used within the ETH blue-c collaborative virtual environment. After the acquisition of raw point samples, sophisticated postprocessing techniques are required to clean, denoise, enhance, or smooth the data. The second part of this talk presents our latest concepts for generalizing Fourier transforms to point-sampled geometry. The method partitions the point set and computes a local spectral decomposition for each patch using the FFT. The notion of frequency gives us access to a rich repertoire of signal processing methods, including lowpass or highpass filtering, spectral estimation and resampling. The third part of my talk is dedicated to the concepts we developed
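For readers unfamiliar with the spectral processing mentioned above, the following is a minimal sketch of patch-wise spectral low-pass filtering of point-sampled geometry. It assumes each patch has already been resampled onto a regular height-field grid; the parameterization and boundary-blending steps of the actual method are omitted, and the names are illustrative only.

```python
# Minimal sketch (not the paper's implementation) of patch-wise spectral
# low-pass filtering of point-sampled geometry. Assumes each patch has
# already been parameterized over a regular N x N grid of height samples.
import numpy as np

def lowpass_filter_patch(heights, cutoff):
    """Remove frequencies above `cutoff` (cycles per patch) from one patch."""
    n = heights.shape[0]
    spectrum = np.fft.fft2(heights)                 # local spectral decomposition
    fx = np.fft.fftfreq(n) * n                      # integer frequencies per axis
    fy = fx[:, None]
    mask = (np.abs(fx)[None, :] <= cutoff) & (np.abs(fy) <= cutoff)
    return np.real(np.fft.ifft2(spectrum * mask))   # back to the spatial domain

# Usage: filter every patch of a partitioned model; blending across patch
# boundaries is needed in practice and is not shown here.
patch = np.random.rand(64, 64)          # hypothetical height field of one patch
smoothed = lowpass_filter_patch(patch, cutoff=8)
```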
{"title":"Processing and rendering of point sampled geometry","authors":"M. Gross","doi":"10.1109/PCCGA.2001.962875","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962875","url":null,"abstract":"Within the history of computer graphics a plenitude of sophisticated surface representations and graphics primitives have been devised, including splines, implicit surfaces, or hierarchical approaches. All of these methods aim at facilitating the creation, processing and display of graphics models with increasingly complex shape or surface detail. In spite of the sophistication of these methods the triangle has survived over decades as the major graphics primitive meeting a right balance between descriptive power and computational effort. As a consequence, today’s consumer graphics hardware is mostly tailored to high performance triangle processing. In addition, an upcoming repertoire of powerful geometric processing methods seems to foster the concept of triangle meshes for graphics modeling. In recent years, the emergence of affordable 3D scanning devices along with the demand for ever more geometric detail and rich organic shapes has created the need to process and render very large point sampled models efficiently. At data sizes where triangle based methods approach their limits point representations are receiving a growing attention. Unlike triangles, points have largely been neglected as a graphics primitive. Although being included in many APIs, it is only recently that point samples experience a renaissance in computer graphics. Conceptually, points provide a discretization of geometry without explicit storage of topology. Thus, point samples reduce the representation to the essentials needed for rendering and enable us to generate highly optimized object representations. Although the loss of topology poses great challenges for graphics processing, the latest generation of algorithms features high performance rendering, point/pixel shading, anisotropic texture mapping, and advanced signal processing of point sampled geometry. In this talk, I will introduce point samples as a versatile graphics primitive and present concepts for the acquisition, processing and rendering of large point sets. The first part of the talk discusses low-cost scanning devices and algorithms being used to reconstruct 3D point clouds from video image sequences. Powerful PC clusters allow for the real-time computation of the underlying image processing algorithms. Such concepts have been used within the ETH blue-c1 collaborative virtual environment. After the acquisition of raw point samples sophisticated postprocessing techniques are required to clean, denoise, enhance, or smooth the data. The second part of this talk presents our latest concepts for generalizing Fourier transforms to point sampled geometry. The method constitutes a partitioning of the point set and computes a local spectral decomposition for each patch using the FFT. The notion of frequency gives us access to a rich repertoire of signal processing methods including lowpass or highpass filtering, spectral estimation and resampling. The third part of my talk is dedicated to the concepts we developed","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. 
Pacific Graphics 2001","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123075648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advecting procedural textures for 2D flow animation
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962892
D. Kao, A. Pang
The paper proposes the use of specially generated 3D procedural textures for visualizing steady-state 2D flow fields. We use the flow field to advect and animate the texture over time. However, using standard texture advection techniques with arbitrary textures introduces undesirable effects such as: (a) expanding texture from a critical source point, (b) streaking patterns from the boundary of the flow field, (c) crowding of advected textures near an attracting spiral or sink, and (d) absent or sparse textures in some regions of the flow. The paper proposes a number of strategies to solve these problems. We demonstrate how the technique works using both synthetic data and computational fluid dynamics data.
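As background, here is a minimal sketch of the standard backward texture advection the paper builds on (not its procedural-texture remedy). The arrays u and v holding the flow components on a grid, and the nearest-neighbour resampling, are simplifying assumptions.

```python
# Minimal sketch of standard backward texture advection for a steady 2D
# flow: each frame, look up where every pixel came from one step earlier.
import numpy as np

def advect_texture(tex, u, v, dt):
    """Return the texture advected by one time step of the flow (u, v),
    all given as arrays of the same 2D shape (hypothetical inputs)."""
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Backward trace along the flow field.
    src_x = np.clip(xs - dt * u, 0, w - 1)
    src_y = np.clip(ys - dt * v, 0, h - 1)
    # Nearest-neighbour resampling keeps the sketch short; real systems
    # use bilinear filtering to reduce aliasing.
    return tex[src_y.round().astype(int), src_x.round().astype(int)]

# Animating: repeatedly advect and display. The artifacts (a)-(d) listed in
# the abstract appear when `tex` is an arbitrary texture.
```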
{"title":"Advecting procedural textures for 2D flow animation","authors":"D. Kao, A. Pang","doi":"10.1109/PCCGA.2001.962892","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962892","url":null,"abstract":"The paper proposes the use of specially generated 3D procedural textures for visualizing steady state 2D flow fields. We use the flow field to advect and animate the texture over time. However, using standard texture advection techniques and arbitrary textures will introduce some undesirable effects such as: (a) expanding texture from a critical source point, (b) streaking pattern from the boundary of the flow field, (c) crowding of advected textures near an attracting spiral or sink, and (d) absent or lack of textures in some regions of the flow. The paper proposes a number of strategies to solve these problems. We demonstrate how the technique works using both synthetic data and computational fluid dynamics data.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129184539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compressing the property mapping of polygon meshes
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962852
M. Isenburg, J. Snoeyink
Many polygon meshes have properties such as shading normals, colours, texture coordinates, and/or material attributes that are associated with the vertices, faces or corners of the mesh. While current research in mesh compression has focused on connectivity and geometry coding, the compression of properties has received less attention. There are two kinds of information to compress. One specifies each individual property: the property values. The other describes how the properties are attached to the mesh: the property mapping. The authors introduce a predictive compression scheme for the property mapping that is 2 to 10 times more compact than previously reported methods.
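To illustrate the general idea of predictive coding of a property mapping (a toy illustration only, not the authors' scheme): predict that each corner reuses the property index of an adjacent corner and code only the mispredictions explicitly.

```python
# Toy predictive coding of a property mapping: most corners are predicted
# to share the property value of the preceding corner at the same vertex,
# so only rare mispredictions cost many bits. Both input structures are
# hypothetical; the output symbol stream would go to an entropy coder.

def encode_mapping(corner_property, prev_corner_at_vertex):
    """corner_property: property index per corner.
    prev_corner_at_vertex: for each corner, the previous corner around the
    same vertex, or None if there is none."""
    stream = []
    for c, prop in enumerate(corner_property):
        prev = prev_corner_at_vertex[c]
        predicted = corner_property[prev] if prev is not None else None
        if prop == predicted:
            stream.append(("hit",))          # one cheap symbol
        else:
            stream.append(("miss", prop))    # rare, explicit symbol
    return stream
```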
{"title":"Compressing the property mapping of polygon meshes","authors":"M. Isenburg, J. Snoeyink","doi":"10.1109/PCCGA.2001.962852","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962852","url":null,"abstract":"Many polygon meshes have properties such as shading normals, colours, texture coordinates, and/or material attributes that are associated with the vertices, faces or corners of the mesh. While current research in mesh compression has focused on connectivity and geometry coding, the compression of properties has received less attention. There are two kinds of information to compress. One specifies each individual property: the property values. The other describes how the properties are attached to the mesh: the property mapping. The authors introduce a predictive compression scheme for the property mapping that is 2 to 10 times more compact than previously reported methods.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124094048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiresolution interpolation meshes
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962858
T. Michikawa, T. Kanai, M. Fujita, H. Chiyokura
The authors propose a novel multiresolution-based shape representation for 3D mesh morphing. Our approach does not use the combination operations that cause serious problems in previous approaches to mesh morphing. Therefore, we can calculate a hierarchical interpolation mesh robustly using two types of subdivision fitting schemes. Our new representation has a hierarchical semi-regular mesh structure based on subdivision connectivity. This leads to various advantages, including efficient data storage and easy acquisition of an interpolation mesh at an arbitrary subdivision level. We also demonstrate several new features for 3D morphing using multiresolution interpolation meshes.
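Once both shapes share the same semi-regular connectivity, an interpolation mesh is conceptually just a per-vertex blend; the sketch below shows that final step under the assumption that the subdivision-fitting stage has already produced matching vertex arrays.

```python
# Minimal sketch: with a common semi-regular (subdivision) connectivity,
# the interpolation mesh at parameter t is a per-vertex blend at whatever
# subdivision level is needed. The fitting step that produces the common
# connectivity is the hard part and is assumed to have been done already.
import numpy as np

def interpolation_mesh(verts_a, verts_b, t):
    """verts_a, verts_b: (n, 3) vertex arrays with matching connectivity."""
    return (1.0 - t) * verts_a + t * verts_b

# The face list is shared, so only vertex positions change over the morph.
```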
{"title":"Multiresolution interpolation meshes","authors":"T. Michikawa, T. Kanai, M. Fujita, H. Chiyokura","doi":"10.1109/PCCGA.2001.962858","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962858","url":null,"abstract":"The authors propose a novel multiresolution-based shape representation for 3D mesh morphing. Our approach does not use combination operations that caused some serious problems in the previous approaches for mesh morphing. Therefore, we can calculate a hierarchical interpolation mesh robustly using two types of subdivision fitting schemes. Our new representation has a hierarchical semiregular mesh structure based on subdivision connectivity. This leads to various advantages including efficient data storage, and easy acquisition of an interpolation mesh with arbitrary subdivision level. We also demonstrate several new features for 3D morphing using multiresolution interpolation meshes.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122602026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bridging the gap between 2D and 3D: a stream of digital animation techniques
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962889
K. Anjyo
The author presents digital techniques and know-how that have been motivated, created and tried out through our work on cel animation. The techniques focus on extracting three-dimensional characteristics from two-dimensional images of the scene to be animated, or on merging 2D and 3D animations. Several animation examples are also demonstrated to show our practical skills, such as camera projection mapping for giving 3D structure to 2D drawn scenes, and the implicit/explicit use of 3D CG techniques for visual effects.
{"title":"Bridging the gap between 2D and 3D: a stream of digital animation techniques","authors":"K. Anjyo","doi":"10.1109/PCCGA.2001.962889","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962889","url":null,"abstract":"The author presents the digital techniques and know-how that have been motivated, created and tried out through our work for cel animation. The techniques focus on extracting three-dimensional characteristics from two-dimensional images of the scene to be animated, or on merging 2D and 3D animations. Several animation examples are also demonstrated to show our practical skills, such as those for camera projection mapping for 3D structuring the 2D drawing scenes, and implicit/explicit use of 3D CG techniques for visual effects.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133074069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fine tuning: curve and surface deformation by scaling derivatives
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962868
K. Miura, F. Cheng, Lazhu Wang
A deformation-based fine tuning technique for parametric curves and surfaces is presented. A curve or surface is deformed by scaling its derivative, instead of manipulating its control points. Since only the norm of the derivative is adjusted, the resulting curve or surface keeps the basic shape of the original profile and curvature distribution. Therefore, the new technique is especially suitable for last-minute fine tuning in the design process. Other advantages include: (1) the fine tuning process is a truly local method; it can be performed on any portion of a curve or surface, not just on a set of segments or patches; (2) by allowing a user to drag a scalar function to directly adjust the curvature (and, consequently, fairness) of a curve or surface, the new technique makes the shape design process more intuitive and effective; (3) the new technique is suitable for precise shaping and deforming, such as making the curvature of a specific portion twice as large; in many cases, it can achieve results that other methods such as FFD cannot; (4) the fine tuning process can also be applied to subdivision curves and surfaces. Related techniques and test results are included.
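A minimal numerical sketch of the underlying idea, assuming a densely sampled curve and using finite differences in place of the paper's exact formulation: scaling the derivative by a positive factor changes only its norm, so tangent directions, and hence the basic shape, are preserved.

```python
# Minimal numerical sketch (not the paper's formulation): scale the norm of
# a sampled curve's derivative by a user-controlled factor per interval and
# re-integrate. Directions are untouched, so the overall shape is kept.
import numpy as np

def fine_tune(points, s):
    """points: (n, d) samples of a curve; s: (n-1,) positive scale factors
    applied to the finite-difference derivative on each interval."""
    deriv = np.diff(points, axis=0)              # discrete derivative
    scaled = deriv * s[:, None]                  # adjust only the norm
    out = np.empty_like(points)
    out[0] = points[0]                           # keep the start point fixed
    out[1:] = points[0] + np.cumsum(scaled, axis=0)
    return out

# e.g. s = np.ones(n - 1); s[10:20] = 0.5  shortens (flattens) one portion locally.
```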
{"title":"Fine tuning: curve and surface deformation by scaling derivatives","authors":"K. Miura, F. Cheng, Lazhu Wang","doi":"10.1109/PCCGA.2001.962868","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962868","url":null,"abstract":"A deformation-based fine tuning technique for parametric curves and surfaces is presented. A curve or surface is deformed by scaling its derivative, instead of manipulating its control points. Since only the norm of the derivative is adjusted, the resulting curve or surface keeps the basic shape of the original profile and curvature distribution. Therefore, the new technique is especially suitable for last minute fine tuning of the design process. Other advantages include: (1) the fine tuning process is a real local method, it can be performed on any portion of a curve or a surface, not just on a set of segments or patches; (2) by allowing a user to drag a scalar function to directly adjust the curvature (and, consequently, fairness) of a curve or surface, the new technique makes the shape design process more intuitive and effective; (3) the new technique is suitable for precise shaping and deforming such as making the curvature of a specific portion twice as big. In many cases, it can achieve results that other methods such as FFD can not; (4) the fine tuning process can also be used for subdivision curves and surfaces. Related techniques and test results are included.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133780705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explicit control of topological transitions in morphing shapes of 3D meshes
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962859
Shigeo Takahashi, Yoshiyuki Kokojima, Ryutarou Ohbuchi
Existing methods of morphing 3D meshes are often limited to cases in which the 3D input meshes to be morphed are topologically equivalent. The paper presents a new method for morphing 3D meshes having different surface topological types. The most significant feature of the method is that it allows explicit control of topological transitions that occur during the morph. Transitions of topological types are specified by means of a compact formalism that resulted from a rigorous examination of singularities of 4D hypersurfaces and embeddings of meshes in 3D space. Using the formalism, every plausible path of topological transitions can be classified into a small set of cases. In order to guide a topological transition during the morph, our method employs a key frame that binds two distinct surface topological types. The key frame consists of a pair of "faces", each of which is homeomorphic to one of the source (input) 3D meshes. Interpolating the source meshes and the key frame by using a tetrahedral 4D mesh and then intersecting the interpolating mesh with another 4D hypersurface creates a morphed 3D mesh. We demonstrate the power of our methodology by using several examples of topology-transcending morphing.
Adaptive progressive vertex tracing in distributed environments
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962884
T. Ullmann, Daniel Beier, B. Brüderlin, Alexander Schmidt
The paper presents "vertex tracing", a new approach to adaptive progressive ray tracing. The software achieves real-time rendering performance of physically accurate reflection effects on specified scene objects, even with complex scenes in virtual reality applications. The approach is based on hybrid rendering, combining an OpenGL-generated scene with correct reflection characteristics of selected scene objects. The real-time performance of the vertex tracer is achieved with progressive adaptive refinement of the geometry in object space, and with parallelization of the algorithm. Both shared memory and distributed memory architectures have been investigated. Mesh-based load balancing yields a uniform distribution of the computing load, also in a heterogeneous network with resources of widely varying performance. The performance of the overall system is demonstrated with a truck interior in a virtual reality simulator.
{"title":"Adaptive progressive vertex tracing in distributed environments","authors":"T. Ullmann, Daniel Beier, B. Brüderlin, Alexander Schmidt","doi":"10.1109/PCCGA.2001.962884","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962884","url":null,"abstract":"The paper presents \"vertex tracing\", a new approach to adaptive progressive ray tracing. The software achieves real-time rendering performance of physically accurate reflection effects on specified scene objects, even with complex scenes in virtual reality applications. The approach is based on hybrid rendering, combining an OpenGL-generated scene with correct reflection characteristics of selected scene objects. The real-time performance of the vertex tracer is achieved with progressive adaptive refinement of the geometry in object space, and with parallelization of the algorithm. Both shared memory and distributed memory architectures have been investigated. Mesh-based load balancing yields a uniform distribution of the computing load, also in a heterogeneous network with resources of widely varying performance. The performance of the overall system is demonstrated with a truck interior in a virtual reality simulator.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"187 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125839989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the numerical redundancies of geometric constraint systems
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962864
Yan-Tao Li, Shimin Hu, Jiaguang Sun
Determining redundant constraints is a critical task for geometric constraint solvers, since it dramatically affects the solution speed, accuracy, and stability. The paper attempts to determine the numerical redundancies of three-dimensional geometric constraint systems via a disturbance method. The constraints are translated into some unified forms and added to a constraint system incrementally. The redundancy of a constraint can then be decided by disturbing its value. We also prove that graph reduction methods can be used to accelerate the determination process.
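The disturbance idea can be illustrated roughly as follows (an assumption-laden sketch, not the paper's procedure): perturb the target value of the newly added constraint and attempt to re-solve; if the perturbed system cannot be satisfied, the constraint was numerically redundant with respect to the earlier ones. SciPy's least-squares solver stands in for a proper geometric constraint solver.

```python
# Rough disturbance-style redundancy test. `residuals(x, delta)` is a
# hypothetical function returning the residual vector of all constraints,
# with `delta` added to the target value of the newly added constraint.
import numpy as np
from scipy.optimize import least_squares

def is_redundant(residuals, x0, eps=1e-4, tol=1e-8):
    sol = least_squares(lambda x: residuals(x, eps), x0)
    # If the disturbed system cannot be driven to zero residual, the new
    # constraint depends numerically on the earlier ones.
    return np.linalg.norm(sol.fun) > tol
```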
{"title":"On the numerical redundancies of geometric constraint systems","authors":"Yan-Tao Li, Shimin Hu, Jiaguang Sun","doi":"10.1109/PCCGA.2001.962864","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962864","url":null,"abstract":"Determining redundant constraints is a critical task for geometric constraint solvers, since it dramatically affects the solution speed, accuracy, and stability. The paper attempts to determine the numerical redundancies of three-dimensional geometric constraint systems via a disturbance method. The constraints are translated into some unified forms and added to a constraint system incrementally. The redundancy of a constraint can then be decided by disturbing its value. We also prove that graph reduction methods can be used to accelerate the determination process.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125486711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Animating human face under arbitrary illumination
Pub Date: 2001-10-16 | DOI: 10.1109/PCCGA.2001.962887
Tong-Bo Chen, Baocai Yin, Wan-Jun Huang, Dehui Kong
The authors present a method to animate a human face under arbitrary lighting conditions. We first acquire reflectance models of a human face in various expressions by employing a robot to precisely control the light position. By developing a relighting tool, named LUXMASTER, we gain great freedom to change the illumination conditions and obtain a variety of photorealistic novel renderings of the human face. Then, we introduce morphing techniques to add the time dimension and produce a clip of facial animation with varying illumination and expression. To make the morphing work easy, we designed an expressive, intuitive and efficient tool, called FLUIDMAN, which also enables a wide range of visual effects.
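The acquisition step lends itself to the standard linear relighting construction: because light is additive, images captured under many known point lights form a basis, and a novel illumination is a weighted sum of them. The sketch below illustrates that idea only and is not necessarily the LUXMASTER pipeline.

```python
# Image-based relighting by superposition of light: photos of one expression
# under k known light positions are combined linearly to approximate the
# face under a novel lighting environment.
import numpy as np

def relight(basis_images, weights):
    """basis_images: (k, h, w, 3) photos, one per light position.
    weights: (k,) intensities of each light in the novel environment."""
    return np.tensordot(weights, basis_images, axes=1)   # -> (h, w, 3)

# A morph with varying illumination and expression then interpolates both
# the weights and the expression's basis set over time.
```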
{"title":"Animating human face under arbitrary illumination","authors":"Tong-Bo Chen, Baocai Yin, Wan-Jun Huang, Dehui Kong","doi":"10.1109/PCCGA.2001.962887","DOIUrl":"https://doi.org/10.1109/PCCGA.2001.962887","url":null,"abstract":"The authors present a method to animate a human face under arbitrary lighting conditions. We first acquire the reflectance models of a human face in various expressions by employing a robot to precisely control the light position. By developing a relighting tool, named LUXMASTER, we achieve great freedom to change the illumination condition and get a variety of photorealistic novel renderings of the human face. Then, we introduce morphing techniques to add the time dimension and produce a clip of facial animation with varying illumination and expression. In order to make the morphing work easily, we design an expressive, intuitive and efficient tool, called FLUIDMAN, which at the same time enables an enormous possibility of visual effects.","PeriodicalId":387699,"journal":{"name":"Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128049120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}