{"title":"An interface for virtual 3D sculpting via physical proxy","authors":"Jia Sheng, Ravin Balakrishnan, K. Singh","doi":"10.1145/1174429.1174467","DOIUrl":"https://doi.org/10.1145/1174429.1174467","url":null,"abstract":"We explore the design space of using direct finger input in conjunction with a deformable physical prop for the creation and manipulation of conceptual 3D geometric models. The user sculpts virtual models by manipulating the space on, into, and around the physical prop, in an extension of the types of manipulations one would perform on traditional modeling media such as clay or foam. The prop acts as a proxy to the virtual model and hence as a frame of reference to the user's fingers. A prototype implementation uses camera-based motion tracking technology to track passive markers on the fingers and prop. The interface supports a variety of clay-like sculpting operations including deforming, smoothing, pasting, and extruding. All operations are performed using the unconstrained fingers, with command input enabled by a small set of finger gestures coupled with on-screen widgets.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132847713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Individualized reaction movements for virtual humans","authors":"Alejandra García-Rojas, F. Vexo, D. Thalmann","doi":"10.1145/1174429.1174442","DOIUrl":"https://doi.org/10.1145/1174429.1174442","url":null,"abstract":"Virtual Humans creation aims to provide virtual characters with realistic behavior, which implies endowing them with autonomy in an inhabited virtual environment. Autonomous behavior consists of interacting with users or the environment and reacting to stimuli or events. Reactions are unconscious behaviors that are seldom implemented in virtual humans. Frequently, virtual humans show repetitive and robotic movements, which tend to decrease realism. To improve believability in virtual humans, we need to provide individuality. Individualization is achieved by using human characteristics such as personality, gender, and emotions. In this paper, we propose to use those individual descriptors to synthesize different kinds of reactions. We aim for individualized virtual humans to react differently to the same stimuli. This approach is based on observing real people reacting. From these observations, we derived stereotyped reactive movements that can be described by individual characteristics. We use inverse kinematics techniques to synthesize the movements. This allows us to change reaction movements according to the characteristics of the stimuli and to the individuality of a character.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132354858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inverse tone mapping","authors":"F. Banterle","doi":"10.1145/1174429.1174489","DOIUrl":"https://doi.org/10.1145/1174429.1174489","url":null,"abstract":"In recent years, many Tone Mapping Operators (TMOs) have been presented for displaying High Dynamic Range Images (HDRIs) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The dual of tone mapping, inverse tone mapping, expands a Low Dynamic Range Image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. The majority of today's media, however, is stored in low dynamic range. Inverse Tone Mapping Operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image-based lighting. We propose an approximate solution to this problem that uses median-cut to find the areas of high luminance and subsequently applies density estimation to generate an Expand-map, in order to extend the range in the high-luminance areas using an inverse Photographic Tone Reproduction operator.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115588140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Voronoi diagram depth sorting for polygon visibility ordering","authors":"S. Fukushige, Hiromasa Suzuki","doi":"10.1145/1174429.1174506","DOIUrl":"https://doi.org/10.1145/1174429.1174506","url":null,"abstract":"Visibility determination is one of the oldest problems in computer graphics. Visibility, in terms of back-to-front polygon ordering, can be determined by updating a priority list as the viewpoint moves. A new list-priority algorithm, utilizing a property of Voronoi diagrams, is proposed in this paper. In the preprocessing phase, the 3D space is divided into Voronoi cells in order to cluster polygons that can be assigned a fixed set of priority orders within each cluster. During the post-processing phase, the clusters and the polygons they contain are depth-sorted correctly. The most time-consuming work is undertaken during the preprocessing phase, which only has to be executed once per scene. All the polygons in a cluster are pre-computed to obtain the view-independent priority order within the cluster. Thus, a relatively simple task is left for the post-processing phase: only the clusters have to be re-sorted when the viewpoint changes. One reason to explore list-priority algorithms is that they offer flexibility that hardware configurations (such as the Z-buffer approach) do not possess. One example is rendering with correct treatment of translucency effects. Translucency is an important graphics effect that can be used to increase the realism of the rendered scene or to enable more effective visual inspection in visualization.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114368993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human visual perception of region warping distortions with different display and scene characteristics","authors":"Yang-Wai Chow, R. Pose, Matthew J. P. Regan, J. Phillips","doi":"10.1145/1174429.1174490","DOIUrl":"https://doi.org/10.1145/1174429.1174490","url":null,"abstract":"Perceptually based computer graphics techniques attempt to take advantage of limitations in the human visual system to improve system performance. This paper investigates, from the perspective of human visual perception, the distortions caused by the implementation of a technique known as region warping. Region warping was devised in conjunction with other techniques to facilitate priority rendering for a virtual reality Address Recalculation Pipeline (ARP) system. The ARP is a graphics display architecture designed to reduce user head rotational latency in immersive Head Mounted Display (HMD) virtual reality. Priority rendering was developed for use with the ARP system to reduce the overall rendering load. Large object segmentation, region priority rendering, and region warping are techniques that have been introduced to assist priority rendering and to further reduce the overall rendering load. Region warping, however, causes slight distortions to appear in the graphics. While this technique might improve system performance, the human experience and perception of the system cannot be neglected. This paper presents results of two experiments that address issues raised by our previous studies. In particular, these experiments investigate whether anti-aliasing and virtual environments with different scene complexities might affect a user's visual perception of region warping distortions.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123035177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Connectivity compression for non-triangular meshes by context-based arithmetic coding","authors":"Y. Liu, E. Wu","doi":"10.1145/1174429.1174498","DOIUrl":"https://doi.org/10.1145/1174429.1174498","url":null,"abstract":"In this article, we present an efficient algorithm for encoding the connectivity information of general polygon meshes. The algorithm is a single-resolution lossless compression method for meshes, aimed mainly at non-triangular meshes. In comparison with the excellent algorithms previously proposed for non-triangular meshes, the new method greatly improves the compression ratio by using a novel entropy coding method: a Huffman coder is first applied, and then a context-based arithmetic coder is employed to encode the Huffman codes. The new method also puts forward a novel mesh traversal method in which each polygon face may be traversed multiple times, though each face is still encoded only once. In this new method, \"jump\" operations are added to replace the \"split\" operations commonly used in various existing connectivity compression algorithms. Much of the decoding time and space can be saved by the new traversal method, which takes advantage of a decoding scheme in which each operator code can be discarded as soon as it is decoded. The decoding method is therefore well suited to applications with online transmission and decoding. In other words, our algorithm has the advantage of parallel encoding and decoding. The algorithm is also capable of handling meshes with holes.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129515847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face feature extraction using Bayesian network","authors":"Zulkifli Dol, R. A. Salam, Z. Zainol","doi":"10.1145/1174429.1174475","DOIUrl":"https://doi.org/10.1145/1174429.1174475","url":null,"abstract":"Face recognition is highly dependent on two stages: image preprocessing and classification. Methods for feature extraction and classification have been investigated. Through these investigations, a method that uses a Bayesian network for feature extraction and the backpropagation algorithm for classification has been proposed. A prototype of the system was implemented and experiments were carried out. A different set of parameters was used for each experiment; the parameters involved were the learning rate, the momentum rate, and the number of training cycles. Results were satisfactory. The best performance achieved was 78% successful recognition with the feature extraction process and 70% without it.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"308 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124389734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic expressive deformations for stylizing motion","authors":"P. Noble, Wen Tang","doi":"10.1145/1174429.1174438","DOIUrl":"https://doi.org/10.1145/1174429.1174438","url":null,"abstract":"3D computer animation often struggles to compete with the flexibility and expressiveness commonly found in traditional animation, particularly when rendered non-photorealistically. We present an animation tool that takes skeleton-driven 3D computer animations and generates expressive deformations to the character geometry. The technique is based upon the cartooning and animation concepts of 'lines of action' and 'lines of motion' and automatically infuses computer animations with some of the expressiveness displayed by traditional animation. Motion and pose-based expressive deformations are generated from the motion data and the character geometry is warped along each limb's individual line of motion. The effect of this subtle, yet significant, warping is twofold: geometric inter-frame consistency is increased which helps create visually smoother animated sequences, and the warped geometry provides a novel solution to the problem of implied motion in non-photorealistic still images.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"699 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122984341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-view point splatting","authors":"T. Hübner, Yanci Zhang, R. Pajarola","doi":"10.1145/1174429.1174479","DOIUrl":"https://doi.org/10.1145/1174429.1174479","url":null,"abstract":"The fundamental drawback of current stereo and multi-view visualization is the necessity of performing multi-pass rendering (one pass for each view) and subsequent image composition and masking to generate multiple stereo views. Thus the rendering time generally increases linearly with the number of views. In this paper we introduce a new method for multi-view splatting based on deferred blending. Our method exploits the programmability of modern graphics processing units (GPUs) to render multiple stereo views in a single rendering pass. The views are calculated directly on the GPU, including sub-pixel wavelength-selective views. We describe our algorithm precisely and provide details about its implementation. Experimental results demonstrate the performance advantage of our multi-view point splatting algorithm compared to the standard multi-pass approach.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129184417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing Hurricane Katrina: large data management, rendering and display challenges","authors":"S. Venkataraman, W. Benger, A. Long, Byungil Jeong, L. Renambot","doi":"10.1145/1174429.1174465","DOIUrl":"https://doi.org/10.1145/1174429.1174465","url":null,"abstract":"The onslaught of Hurricane Katrina has highlighted the need for effective information display. Visualization of geoscientific data faces challenges of size, integration, and representation. Rendering methods need to cope with the surge of data due to advancements in acquisition techniques and computing power. Moreover, data stemming from different application communities are not compatible a priori. Holistic representations are important to communicate the causes and impact of natural catastrophes to the scientists themselves, decision-makers, and the general public. To address these issues, we have developed efficient data layout mechanisms to ensure fast and uniform access to diverse data. We apply effective rendering techniques that intuitively and interactively convey the phenomena. Finally, we discuss the use of high-resolution displays connected via high-speed networks to support collaboration. These components establish a framework for application in hurricane research, coastal modeling, and beyond.","PeriodicalId":360852,"journal":{"name":"Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114330104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}