We present an optimization of the water column-based height-field approach to water simulation that reduces its memory footprint and promotes parallel implementation. The simulation still provides three-dimensional fluid animation suitable for water flowing over irregular terrains, and is intended for interactive applications. Our approach avoids the creation and storage of redundant virtual pipes between columns of water, and removes the output dependency from the parallel implementation. We show a GPU implementation of the proposed method that runs at near-interactive frame rates with rich lighting effects on the water surface, making it well suited to animating water on natural terrains in computer graphics.
{"title":"Efficient animation of water flow on irregular terrains","authors":"M. Maes, T. Fujimoto, Norishige Chiba","doi":"10.1145/1174429.1174447","DOIUrl":"https://doi.org/10.1145/1174429.1174447","url":null,"abstract":"We present an optimization of the water column-based height-field approach of water simulation by reducing memory footprint and promoting parallel implementation. The simulation still provides three-dimensional fluid animation suitable for water flowing on irregular terrains, intended for interactive applications. Our approach avoids the creation and storage of redundant virtual pipes between columns of water, and removes output dependency for the parallel implementation. We show a GPU implementation of the proposed method that runs at near interactive frame rates with rich lighting effects on the water surface, making it efficient for water animation on natural terrains for Computer Graphics.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"244 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114441737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a novel technique for capturing spatially and temporally resolved light probe sequences, and using them for rendering. For this purpose we have designed and built a Real Time Light Probe: a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The Real Time Light Probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe image into a common frame of reference in world coordinates, and map each point in space along the path of motion to a particular frame in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real-world lighting, using both traditional image-based lighting methods with temporally varying light probe illumination and an extension that handles spatially varying lighting conditions across large objects.
{"title":"Densely sampled light probe sequences for spatially variant image based lighting","authors":"J. Unger, S. Gustavson, A. Ynnerman","doi":"10.1145/1174429.1174487","DOIUrl":"https://doi.org/10.1145/1174429.1174487","url":null,"abstract":"We present a novel technique for capturing spatially and temporally resolved light probe sequences, and using them for rendering. For this purpose we have designed and built a Real Time Light Probe; a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The Real Time Light Probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images with a dynamic range of 10,000,000:1 at 25 frames per second.By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point in space along the path of motion to a particular frame in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, using both traditional image based lighting methods with temporally varying light probe illumination and an extension to handle spatially varying lighting conditions across large objects.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121433431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we compute the distance transform of a 3D triangle mesh. A volumetric voxel representation is defined over the mesh to evaluate the distance transform, and optimizations are described to efficiently manipulate the volumetric data structure that represents the mesh. A new method for adaptive filtering of the distance transform is introduced to smooth and denoise meshes reconstructed from data acquired with a 3D scanner. A modified version of the Marching Cubes algorithm is presented to correctly reconstruct the final mesh from the filtered distance transform defined on the voxel representation. The new filtering method is feature preserving and more versatile than previous algorithms described in the literature. Results show that it outperforms previous methods in terms of an error metric comparison. Future work to improve the method and its computational performance is discussed.
{"title":"3D distance transform adaptive filtering for smoothing and denoising triangle meshes","authors":"M. Fournier, J. Dischler, D. Bechmann","doi":"10.1145/1174429.1174497","DOIUrl":"https://doi.org/10.1145/1174429.1174497","url":null,"abstract":"In this paper we compute the distance transform of a 3D triangle mesh. A volumetric voxel representation is defined over the mesh to evaluate the distance transform. Optimizations are described to efficiently manipulate the volumetric data structure that represents the mesh. A new method for adaptive filtering of the distance transform is introduced to smooth and reduce the noise on the meshes that were reconstructed from scanned data acquired with a 3D scanner. A modified version of the Marching Cube algorithm is presented to correctly reconstruct the final mesh of the filtered distance transform defined with the voxel representation. The new filtering method is feature preserving and it is more versatile than previous algorithms described in the literature. Results show that this method outperforms previous ones in term of an error metric comparison. Future works are discussed to improve the new method and its computing performances.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"131 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124960740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Texture-mapping hardware has been successfully exploited for volume rendering. In this paper, we efficiently combine the splatting method with 2D texture mapping and propose a footprint-based volume rendering algorithm accelerated by multi-texture mapping. First, a regular data set is partitioned into texture slices along the primary viewing direction. The partitioned data are then projected onto the texture slice planes. Finally, the texture slices are blended to compose the final image. With our algorithm, scaled volume data sets can be rendered quickly and effectively without noticeably degrading image quality.
{"title":"An accelerating splatting algorithm based on multi-texture mapping for volume rendering","authors":"Han Xiao, De-Gui Xiao","doi":"10.1145/1174429.1174464","DOIUrl":"https://doi.org/10.1145/1174429.1174464","url":null,"abstract":"Texture-mapping hardware has been successfully exploited for volume rendering. In this paper, we combine splatting method with 2D texture mapping efficiently and propose an algorithm for footprint algorithm based volume rendering accelerated by multi texture mapping. First of all, a regular data set is divided by some texture slices along the primary viewing direction. Then the segmented data are projected to the texture slices plane. Lastly, all these texture slices compose the final image by blending. With our algorithm, the scaled volume data set can be rendered effectively and fast but not degrade the quality of image remarkably.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"75 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131922209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of textures provides a rich and diverse set of possibilities for the visualization of flow data. In this paper, we present methods designed to produce oriented and controlled textures that accurately reflect the complex patterns that occur in vector field visualizations. First, we offer new insights based on the specification and classification of neighborhood models for synthesizing a texture that accurately depicts a vector field. Second, we introduce a computationally efficient method for texture mapping streamlines that uses outlining textures to depict flow orientation.
{"title":"Directional enhancement in texture-based vector field visualization","authors":"Francesca Taponecco, T. Urness, V. Interrante","doi":"10.1145/1174429.1174463","DOIUrl":"https://doi.org/10.1145/1174429.1174463","url":null,"abstract":"The use of textures provides a rich and diverse set of possibilities for the visualization of flow data. In this paper, we present methods designed to produce oriented and controlled textures that accurately reflect the complex patterns that occur in vector field visualizations. We offer new insights based on the specification and classification of neighborhood models for synthesizing a texture that accurately depicts a vector field. Secondly, we introduce a computationally efficient method of texture mapping streamlines utilizing outlining textures to depict flow orientation.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116686592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While parametric curves dominate current character motion animation practice, implicit curves hold considerable potential for motion animation, since implicit surfaces have succeeded in applications such as modeling, deformation, and rendering within computer graphics and animation. In this paper, we advocate the use of implicit curves in motion animation and explore their use for task specification. A planar implicit curve represents the motion path, while a speed profile curve describes the motion timing. We then propose an approach and algorithm for motion inbetweening along implicit curves. Based on the motion path and motion speed, a curve-oriented inbetweening technique generates inbetween position sequences in Cartesian space, from which inbetween frames in parametric space are obtained for animation by inverse kinematics.
{"title":"Implicit curve oriented inbetweening for motion animation","authors":"Haiyin Xu, Dan Li, Jian Wang","doi":"10.1145/1174429.1174443","DOIUrl":"https://doi.org/10.1145/1174429.1174443","url":null,"abstract":"While the parametric curve is overwhelmingly used in current character motion animation practice, the utilization of the implicit curve is recognized to be extremely potential for motion animation since the implicit surface have succeeded in such application as modeling, deformation and rendering within computer graphics and animation. In this paper, we advocate the use of implicit curves in motion animation. Implicit curves for task specification in motion animation is proposed and further explored in this paper. A planar implicit curve is used to represent the motion path while a speed profile curve is adopted to describe the motion timing. Then an approach and algorithm to motion inbetweening along implicit curves is proposed. Based on the motion path and motion speed, curve oriented inbetweening technique is utilized to generate inbetween position sequences in Cartesian space from which frame inbetweens in parametric space for animation are obtained by inverse kinematics.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115387736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wrinkles are important for realistic facial animation and modeling because they aid in recognizing a person's expressions as well as their age. Different techniques have been used to generate wrinkles, both fine-scale and large-scale. This paper presents a technique for modeling large-scale wrinkles, also known as expressive wrinkles, on the human face using points instead of triangular meshes. The wrinkle is modeled on a drawing basis, where users see the effect directly once the shape and location of the wrinkle have been specified on the 3D face mesh itself. The data involved in modeling the wrinkle are then retrieved and processed, and a new wrinkle shape function is introduced and applied during this process to capture the realism of the generated wrinkle.
{"title":"Modeling expressive wrinkle on human face","authors":"Nurazlin Zainal Azmi, R. Rahmat, R. Mahmod","doi":"10.1145/1174429.1174500","DOIUrl":"https://doi.org/10.1145/1174429.1174500","url":null,"abstract":"Wrinkles are important for realistic facial animation and modeling because they aid in recognizing human's expressions as well as person's age. Different techniques have been used to generate wrinkles, whether it is fine-scale or large-scale wrinkles. This paper presents a technique on modeling large-scale wrinkles, also known as the expressive wrinkle, on human face by using points instead of triangular meshes. The wrinkle will be modeled on a drawing basis, where users can see the effect directly once the shape and location of the wrinkle has been specified on the 3D face mesh itself. After that, the data involved in modeling the wrinkle are retrieved and processed. A new wrinkle shape function will be introduced to capture the realism of the wrinkle generated where it will be applied during the process.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127282419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we present a new global hierarchical Z-space sort-last algorithm for cluster parallel graphics architectures that improves upon algorithms used so far for high-performance super-graphics. The new algorithm bypasses the limitations of sort-last, tile-based parallelization paradigms, and solves some known Z-space parallelization inefficiencies. The algorithm is implemented as a global hierarchical-Z system which allows GPUs to perform high-frequency global intra-frame Z-culling and distributed final-frame Z-determination. The new algorithm allows full one-to-one process-GPU coupling with minimal inter-process and inter-GPU communication. This enables maximal input bandwidth, maximum GPU utilization, near-optimal load balance, and improved efficiency when scaled to larger configurations.
{"title":"A global hierarchical Z space algorithm for cluster parallel graphics architectures","authors":"A. Santilli, Ewa Huebner","doi":"10.1145/1174429.1174451","DOIUrl":"https://doi.org/10.1145/1174429.1174451","url":null,"abstract":"In this paper we present a new global hierarchical Z-space sort-last algorithm for cluster parallel graphics architectures that improves upon algorithms used so far for high performance super-graphics. The new algorithm bypasses limitations of sort-last tile based parallelization paradigms, and solves some known Z-space parallelization inefficiencies. The algorithm is implemented as a global hierarchical-Z system which allows GPUs to perform high frequency global intra-frame Z-culling and distributed final frame Z-determination. The new algorithm allows for full one-to-one process-GPU coupling with minimal inter-process and inter-GPU communications. This enables maximal input bandwidth, maximum GPU utilization levels, near optimal load balances and improved efficiency when scaled to larger configurations.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129231744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The explosion in the number of cameras surveilling the environment in recent years is generating a need for systems capable of analysing video streams for important events. This paper outlines a system for detecting noteworthy behaviours (from a security or surveillance perspective) that does not require enumerating the event sequences of all possible activities of interest. Instead, the focus is on calculating a measure of the abnormality of the action taking place. This raises the need for a low-complexity tracking algorithm that is robust to the noise artefacts present in video surveillance systems. The tracking technique described herein achieves this goal by using a "future history" buffer of images, delaying the classification and tracking of objects by the time quantum given by the buffer size. This allows disambiguation of noise blobs and facilitates classification in the case of occlusions and the disappearance of people due to lighting, failures in the background model, and so on.
{"title":"Tracking and video surveillance activity analysis","authors":"Michael Cheng, Binh Pham, D. Tjondronegoro","doi":"10.1145/1174429.1174491","DOIUrl":"https://doi.org/10.1145/1174429.1174491","url":null,"abstract":"The explosion in the number of cameras surveilling the environment in recent years is generating a need for systems capable of analysing video streams for important events. This paper outlines a system for detecting noteworthy behaviours (from a security or surveillance perspective) which does not involve the enumeration of the event sequences of all possible activities of interest. Instead the focus is on calculating a measure of the abnormality of the action taking place. This raises the need for a low complexity tracking algorithm robust to the noise artefacts present in video surveillance systems. The tracking technique described herein achieves this goal by using a \"future history\" buffer of images and so delaying the classification and tracking of objects by the time quantum which is the buffer size. This allows disambiguation of noise blobs and facilitates classification in the case of occlusions and disappearance of people due to lighting, failures in the background model etc.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121981502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a novel approach for automatically adding expressive attributes to motion capture based animations by utilizing learned behavior from the natural deformations of the human skeleton. The author envisions this system as part of a larger toolbox that enables animators to quickly modify the emotional qualities of motion capture data.
{"title":"Learned deformable skeletons for motion capture based animation","authors":"Alyssa Lees","doi":"10.1145/1174429.1174440","DOIUrl":"https://doi.org/10.1145/1174429.1174440","url":null,"abstract":"This paper presents a novel approach for automatically adding expressive attributes to motion capture based animations by utilizing learned behavior from the natural deformations of the human skeleton. The author envisions this system as part of a larger toolbox that enables animators to quickly modify the emotional qualities of motion capture data.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130590581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}