Impostors and pseudo-instancing for GPU crowd rendering
Erik Millán, Isaac Rudomín (2006). DOI: https://doi.org/10.1145/1174429.1174436

Animated crowds are an effective way to increase realism in virtual reality applications, but rendering them requires large computational power. In this paper, we present a technique for rendering large crowds of characters that takes advantage of existing programmable graphics hardware. Impostors are used for the low-detail representation, while pseudo-instancing is used for higher detail. An LOD map selects between the two representations based on a customizable threshold.
An accelerating splatting algorithm based on multi-texture mapping for volume rendering
Han Xiao, De-Gui Xiao (2006). DOI: https://doi.org/10.1145/1174429.1174464

Texture-mapping hardware has been successfully exploited for volume rendering. In this paper, we efficiently combine the splatting method with 2D texture mapping and propose a footprint-based volume rendering algorithm accelerated by multi-texture mapping. First, a regular data set is partitioned into texture slices along the primary viewing direction. The segmented data are then projected onto the texture slice planes. Finally, the texture slices are blended to compose the final image. With our algorithm, scaled volume data sets can be rendered quickly and effectively without noticeably degrading image quality.
3D distance transform adaptive filtering for smoothing and denoising triangle meshes
M. Fournier, J. Dischler, D. Bechmann (2006). DOI: https://doi.org/10.1145/1174429.1174497

In this paper we compute the distance transform of a 3D triangle mesh. A volumetric voxel representation is defined over the mesh to evaluate the distance transform, and optimizations are described for efficiently manipulating the volumetric data structure that represents the mesh. A new method for adaptive filtering of the distance transform is introduced to smooth and denoise meshes reconstructed from data acquired with a 3D scanner. A modified version of the Marching Cubes algorithm is presented to correctly reconstruct the final mesh from the filtered distance transform defined on the voxel representation. The new filtering method is feature preserving and more versatile than previous algorithms described in the literature. Results show that it outperforms previous methods in terms of an error-metric comparison. Future work to improve the method and its computational performance is discussed.
Real-time animation of ancient Roman sites
N. Magnenat-Thalmann, A. Foni, Nedjma Cadi-Yazli (2006). DOI: https://doi.org/10.1145/1174429.1174432

In this article we discuss and detail the general methodological approaches, reconstruction strategies, and techniques employed to achieve interactive, real-time 3D visualization of the digitally restituted inhabited ancient sites of Aspendos and Pompeii, simulated using a virtual reality and an augmented reality setup respectively. More specifically, the two case studies that illustrate our general methodology concern the VR restitution of the Roman theatre of Aspendos in Turkey, visualized as it was in the 3rd century, and the on-site AR simulation of a digitally restored Thermopolium at the archaeological site of Pompeii in Italy. To enhance both simulated 3D environments, each case study includes real-time animated virtual humans re-enacting situations and activities that were typically performed at these sites in ancient times. We also present the modelling and illumination strategies we implemented, along with the design choices made in preparing the textured 3D models of the sites and the virtual humans, and in optimizing them for real-time interactive visualization.
Learned deformable skeletons for motion capture based animation
Alyssa Lees (2006). DOI: https://doi.org/10.1145/1174429.1174440

This paper presents a novel approach for automatically adding expressive attributes to motion capture based animations by utilizing learned behavior from the natural deformations of the human skeleton. The author envisions this system as part of a larger toolbox that enables animators to quickly modify the emotional qualities of motion capture data.
A global hierarchical Z space algorithm for cluster parallel graphics architectures
A. Santilli, Ewa Huebner (2006). DOI: https://doi.org/10.1145/1174429.1174451

In this paper we present a new global hierarchical Z-space sort-last algorithm for cluster parallel graphics architectures that improves upon algorithms used so far for high-performance graphics. The new algorithm bypasses limitations of tile-based sort-last parallelization paradigms and solves some known Z-space parallelization inefficiencies. It is implemented as a global hierarchical-Z system that allows GPUs to perform high-frequency global intra-frame Z-culling and distributed final-frame Z-determination. The new algorithm allows full one-to-one process-GPU coupling with minimal inter-process and inter-GPU communication, enabling maximal input bandwidth, maximum GPU utilization, near-optimal load balance, and improved efficiency when scaled to larger configurations.
Tracking and video surveillance activity analysis
Michael Cheng, Binh Pham, D. Tjondronegoro (2006). DOI: https://doi.org/10.1145/1174429.1174491

The explosion in the number of cameras surveilling the environment in recent years has created a need for systems capable of analysing video streams for important events. This paper outlines a system for detecting behaviours that are noteworthy from a security or surveillance perspective without enumerating the event sequences of all possible activities of interest; instead, the focus is on calculating a measure of the abnormality of the action taking place. This requires a low-complexity tracking algorithm that is robust to the noise artefacts present in video surveillance systems. The tracking technique described here achieves this goal by using a "future history" buffer of images, delaying the classification and tracking of objects by a time quantum equal to the buffer size. This allows noise blobs to be disambiguated and facilitates classification in the presence of occlusions and the disappearance of people due to lighting, background-model failures, and similar effects.
Directional enhancement in texture-based vector field visualization
Francesca Taponecco, T. Urness, V. Interrante (2006). DOI: https://doi.org/10.1145/1174429.1174463

The use of textures provides a rich and diverse set of possibilities for the visualization of flow data. In this paper, we present methods designed to produce oriented and controlled textures that accurately reflect the complex patterns that occur in vector field visualizations. We offer new insights based on the specification and classification of neighborhood models for synthesizing a texture that accurately depicts a vector field, and we introduce a computationally efficient method of texture mapping streamlines that utilizes outlining textures to depict flow orientation.
Modeling expressive wrinkle on human face
Nurazlin Zainal Azmi, R. Rahmat, R. Mahmod (2006). DOI: https://doi.org/10.1145/1174429.1174500

Wrinkles are important for realistic facial animation and modeling because they aid in recognizing a person's expressions as well as their age. Different techniques have been used to generate both fine-scale and large-scale wrinkles. This paper presents a technique for modeling large-scale wrinkles, also known as expressive wrinkles, on the human face using points instead of triangular meshes. Wrinkles are modeled on a drawing basis: users see the effect directly once the shape and location of a wrinkle have been specified on the 3D face mesh itself. The data involved in modeling the wrinkle are then retrieved and processed, and a new wrinkle shape function, applied during this process, is introduced to capture the realism of the generated wrinkle.
Implicit curve oriented inbetweening for motion animation
Haiyin Xu, Dan Li, Jian Wang (2006). DOI: https://doi.org/10.1145/1174429.1174443

While parametric curves are used overwhelmingly in current character motion animation practice, implicit curves hold great potential for motion animation, since implicit surfaces have succeeded in applications such as modeling, deformation, and rendering within computer graphics and animation. In this paper, we advocate the use of implicit curves in motion animation and propose and explore their use for task specification. A planar implicit curve represents the motion path, while a speed profile curve describes the motion timing. An approach and algorithm for motion inbetweening along implicit curves is then proposed: based on the motion path and motion speed, a curve-oriented inbetweening technique generates inbetween position sequences in Cartesian space, from which inbetween frames in parameter space are obtained by inverse kinematics.