"A 3D visual analysis tool in support of the SANDF's growing ground based air defence simulation capability" — B. Duvenhage, J. Delport, A. Louis (doi:10.1145/1294685.1294692)

A 3D visual analysis tool has been developed to add value to the SANDF's growing Ground Based Air Defence (GBAD) System of Systems simulation capability. A time-based XML interface between the simulation and the analysis tool, via a TCP connection or a log file, allows individual simulation objects to be wholly updated or partially modified. Live pause and review of the simulation action are supported by employing data key frames and compressed XML for enhanced performance. An innovative configurable filter tree allows visual clutter to be reduced as required, and an open-source scene graph (OpenSceneGraph) manages the 3D scene representation and rendering. A visualisation capability is developed for the effective presentation of dynamic air defence system behaviour, system state transitions and inter-system communication. The visual analysis tool has been applied successfully in support of system performance experiments, tactical doctrine development and simulation support during training and live field exercises. The 3D visualisation resulted in improved situational awareness during experiment analysis, increased involvement of the SANDF in experiment analysis, and improved credibility of analysis results presented during live or after-action visual feedback sessions.
{"title":"A 3D visual analysis tool in support of the SANDF's growing ground based air defence simulation capability","authors":"B. Duvenhage, J. Delport, A. Louis","doi":"10.1145/1294685.1294692","DOIUrl":"https://doi.org/10.1145/1294685.1294692","url":null,"abstract":"A 3D visual analysis tool has been developed to add value to the SANDF's growing Ground Based Air Defence (GBAD) System of Systems simulation capability. A time based XML interface between the simulation and analysis tool, via a TCP connection or a log file, allows individual simulation objects to be wholly updated or partially modified. Live pause and review of the simulation action is supported by employing data key frames and compressed XML for enhanced performance. An innovative configurable filter tree allows visual clutter to be reduced as required and an open source scene graph (OpenSceneGraph) manages the 3D scene representation and rendering.\u0000 A visualisation capability is developed for the effective presentation of the dynamic air defence system behaviour, system state transitions and inter-system communication. The visual analysis tool has successfully been applied in support of system performance experiments, tactical doctrine development and simulation support during training and live field exercises. The 3D visualisation resulted in improved situational awareness during experiment analysis, in increased involvement of the SANDF in experiment analysis and in improved credibility of analysis results presented during live or after action visual feedback sessions.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114838595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Extensible approach to the virtual worlds editing" — V. Kovalcík, J. Flašar, Jirí Sochor (doi:10.1145/1294685.1294691)

We present a virtual reality framework (VRECKO) with an editor capable of creating new scenes or applications using the framework. The VRECKO system consists of objects with predefined behaviors that an application designer can change dynamically. With instances of a special object type called Ability, we may extend or change the behaviors of objects in a scene. As an example of this approach, we present an editor implemented entirely as a set of abilities. Editing is done directly in the 3D environment, which has several benefits over 2D editing, particularly the possibility of working with a scene exactly as it appears in the final application.
{"title":"Extensible approach to the virtual worlds editing","authors":"V. Kovalcík, J. Flašar, Jirí Sochor","doi":"10.1145/1294685.1294691","DOIUrl":"https://doi.org/10.1145/1294685.1294691","url":null,"abstract":"We present a virtual reality framework (VRECKO) with an editor that is capable of creating new scenes or applications using this framework. The VRECKO system consists of objects with predefined behaviors that an application designer can dynamically change. With instances of a special object type called Ability, we may extend or change behaviors of objects in a scene. As an example of this approach, we present an editor that we implemented entirely as a set of abilities. The editing is done directly in 3D environment which has several benefits over the 2D editing, particularly the possibility to work with a scene exactly as in the final application.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117099724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Mechanisms for multimodality: taking fiction to another dimension" — Kevin R. Glass, S. Bangay, B. Alcock (doi:10.1145/1294685.1294708)

We present methods for automatically constructing representations of fiction books in a range of modalities: audio, graphics and 3D virtual environments. The correspondence between the sequential ordering of events and the order in which events are presented in the text is used to correctly resolve the dynamic interactions for each representation. Synthesised audio created from the fiction text is used to calibrate the base time-line against which the other forms of media are aligned. The audio stream is based on speech synthesis using the text of the book, and is enhanced with distinct voices for the different characters in a book. Sound effects are included automatically. The graphical representation presents the text (as subtitles), identifies active characters and provides visual feedback on the content of the story. Dynamic virtual environments conform to the constraints implied by the story, and are used as a source of further visual content. These representations are all aligned to a common time-line, and combined using sequencing facilities to provide a multimodal version of the original text.
{"title":"Mechanisms for multimodality: taking fiction to another dimension","authors":"Kevin R. Glass, S. Bangay, B. Alcock","doi":"10.1145/1294685.1294708","DOIUrl":"https://doi.org/10.1145/1294685.1294708","url":null,"abstract":"We present methods for automatically constructing representations of fiction books in a range of modalities: audibly, graphically and as 3D virtual environments. The correspondence between the sequential ordering of events against the order of events presented in the text is used to correctly resolve the dynamic interactions for each representation. Synthesised audio created from the fiction text is used to calibrate the base time-line against which the other forms of media are correctly aligned. The audio stream is based on speech synthesis using the text of the book, and is enhanced using distinct voices for the different characters in a book. Sound effects are included automatically. The graphical representation represents the text (as subtitles), identifies active characters and provides visual feedback of the content of the story. Dynamic virtual environments conform to the constraints implied by the story, and are used as a source of further visual content. These representations are all aligned to a common time-line, and combined using sequencing facilities to provide a multimodal version of the original text.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122619163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Animating physically based explosions in real-time" — L. Ek, Rune Vistnes, Odd Erik Gundersen (doi:10.1145/1294685.1294696)

We present a framework for real-time animation of explosions that runs completely on the GPU. The simulation allows for arbitrary internal boundaries and is governed by a combustion process, a Stable Fluid solver, which includes thermal expansion, and turbulence modeling. The simulation results are visualised by two particle systems rendered using animated textures. The results are physically based, non-repeating, and dynamic real-time explosions with high visual quality.
{"title":"Animating physically based explosions in real-time","authors":"L. Ek, Rune Vistnes, Odd Erik Gundersen","doi":"10.1145/1294685.1294696","DOIUrl":"https://doi.org/10.1145/1294685.1294696","url":null,"abstract":"We present a framework for real-time animation of explosions that runs completely on the GPU. The simulation allows for arbitrary internal boundaries and is governed by a combustion process, a Stable Fluid solver, which includes thermal expansion, and turbulence modeling. The simulation results are visualised by two particle systems rendered using animated textures. The results are physically based, non-repeating, and dynamic real-time explosions with high visual quality.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115977702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Light field propagation and rendering on the GPU" — J. Mortensen, Pankaj Khanna, M. Slater (doi:10.1145/1294685.1294688)

This paper describes an algorithm that provides fast propagation and real-time walkthrough for globally illuminated synthetic scenes. A type of light field data structure is used for propagating radiance outward from emitters through the scene, accounting for any kind of L(S|D) light path. The light field employed is constructed by choosing a regular point subdivision over a hemisphere, to give a set of directions, and then corresponding to each direction there is a rectangular grid of parallel rays. Each rectangular grid of rays is further subdivided into rectangular tiles, such that each tile references a sequence of 2D images containing outgoing radiances of surfaces intersected by the rays in that tile. We present a novel propagation algorithm running entirely on the Graphics Processing Unit (GPU). It is incremental in that it can resolve visibility along a set of parallel rays in O(n) time and can produce a light field for a moderately complex scene - with complex illumination stored in millions of elements - in minutes and for simpler scenes in seconds. It is approximate but gracefully converges to a correct solution as verified by comparing images with path traced counterparts. We show how to render globally lit images directly from the GPU data structure without CPU involvement at real-time frame rates and high resolutions.
{"title":"Light field propagation and rendering on the GPU","authors":"J. Mortensen, Pankaj Khanna, M. Slater","doi":"10.1145/1294685.1294688","DOIUrl":"https://doi.org/10.1145/1294685.1294688","url":null,"abstract":"This paper describes an algorithm that provides fast propagation and real-time walkthrough for globally illuminated synthetic scenes. A type of light field data structure is used for propagating radiance outward from emitters through the scene, accounting for any kind of L(S|D) light path. The light field employed is constructed by choosing a regular point subdivision over a hemisphere, to give a set of directions, and then corresponding to each direction there is a rectangular grid of parallel rays. Each rectangular grid of rays is further subdivided into rectangular tiles, such that each tile references a sequence of 2D images containing outgoing radiances of surfaces intersected by the rays in that tile. We present a novel propagation algorithm running entirely on the Graphics Processing Unit (GPU). It is incremental in that it can resolve visibility along a set of parallel rays in O(n) time and can produce a light field for a moderately complex scene - with complex illumination stored in millions of elements - in minutes and for simpler scenes in seconds. It is approximate but gracefully converges to a correct solution as verified by comparing images with path traced counterparts. We show how to render globally lit images directly from the GPU data structure without CPU involvement at real-time frame rates and high resolutions.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126824262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Cloth simulation and collision detection using geometry images" — Nico Zink, A. Hardy (doi:10.1145/1294685.1294716)

The simulation and animation of cloth has attracted considerable research interest from the computer graphics community. Cloth that behaves realistically is already expected in animated films, and real-time applications are certain to follow. A common challenge when simulating the complex behaviour of cloth, especially at interactive frame rates, is maintaining an acceptable level of realism while keeping computation time to a minimum. A common way to increase efficiency is to decrease the number of nodes controlling the cloth's movement, sacrificing details that could only be obtained using a dense discretization of the cloth. A simple and efficient method of simulating cloth is the mass-spring system, which utilises a regular grid of vertices representing discrete points on the cloth's surface. The structure of geometry images is similar, which makes them an ideal choice for representing arbitrary surface meshes in a cloth simulator whilst retaining the efficiency of a mass-spring system. In this paper we present a novel method of applying geometry images to cloth simulation in order to obtain cloth motion for surface meshes of arbitrary genus, while retaining the simplicity of a mass-spring model. We also adapt an implicit/explicit integration scheme, utilising the regular structure of geometry images, to improve performance. Additionally, the cloth is able to drape over other objects, also represented as geometry images. Our method is efficient enough to allow fairly dense cloth meshes to be simulated in real-time.
{"title":"Cloth simulation and collision detection using geometry images","authors":"Nico Zink, A. Hardy","doi":"10.1145/1294685.1294716","DOIUrl":"https://doi.org/10.1145/1294685.1294716","url":null,"abstract":"The simulation and animation of cloth has attracted considerable research interest by the computer graphics community. Cloth that behaves realistically is already expected in animated films, and real-time applications are certain to follow. A common challenge faced when simulating the complex behaviour of cloth, especially at interactive frame rates, is maintaining an acceptable level of realism while keeping computation time to a minimum. A common method of increasing the efficiency is a decrease in the number of nodes controlling the cloth movement, sacrificing details that could only be obtained using a dense discretization of the cloth. A simple and efficient method to simulate cloth is the mass-spring system which utilises a regular grid of vertices, representing discrete points along the cloth's surface. The structure of geometry images is similar, which makes them an ideal choice for representing arbitrary surface meshes in a cloth simulator whilst retaining the efficiency of a mass-spring system. In this paper we present a novel method to apply geometry images to cloth simulation in order to obtain cloth motion for surface meshes of arbitrary genus, while retaining the simplicity of a mass-spring model. We also adapt an implicit/explicit integration scheme, utilising the regular structure of geometry images, to improve performance. Additionally, the cloth is able to drape over other objects, also represented as geometry images. Our method is efficient enough to allow for fairly dense cloth meshes to be simulated in real-time.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128392191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Embedded labels for line features in interactive 3D virtual environments" — S. Maass, J. Döllner (doi:10.1145/1294685.1294695)

This paper presents a novel method for labeling line features in interactive virtual 3D environments. It embeds labels into the surfaces of the annotated objects, while occlusion by other scene elements is minimized and overlaps between labels are resolved. Embedded labels provide a high correlation between label and annotated object -- they are especially useful in environments where the screen space available for annotations is limited (e.g., small displays). To determine optimal positions for the annotation of line features, the degree of occlusion for each position is estimated during the real-time rendering process. We discuss a number of sampling schemes used to approximate the visibility measure, including an adapted variant that is particularly suitable for the integration of text based on Latin alphabets. Overlaps between embedded labels are resolved with a conflict graph, which is calculated in a preprocessing step and stores all possible overlap conflicts. To prove the applicability of our approach, we have implemented a prototype application that visualizes street names as embedded labels within a 3D virtual city model in real time.
{"title":"Embedded labels for line features in interactive 3D virtual environments","authors":"S. Maass, J. Döllner","doi":"10.1145/1294685.1294695","DOIUrl":"https://doi.org/10.1145/1294685.1294695","url":null,"abstract":"This paper presents a novel method for labeling line features in interactive virtual 3D environments. It embeds labels into the surfaces of the annotated objects, whereas occlusion by other scene elements is minimized and overlaps between labels are resolved. Embedded labels provide a high correlation between label and annotated object -- they are specifically useful in environments, where available screen-space for annotations is limited (e.g., small displays). To determine optimal positions for the annotation of line features, the degree of occlusion for each position is estimated during the real-time rendering process. We discuss a number of sampling schemes that are used to approximate the visibility measure, including an adapted variant that is particularly suitable for the integration of text based on Latin alphabets. Overlaps between embedded labels are resolved with a conflict graph, which is calculated in a preprocessing step and stores all possible overlap conflicts. To prove the applicability of our approach, we have implemented a prototype application that visualizes street names as embedded labels within a 3D virtual city model in real-time.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133490027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Interpolatory √3 subdivision with harmonic interpolation" — A. Hardy (doi:10.1145/1294685.1294701)

A variation on the interpolatory subdivision scheme [Labsik and Greiner 2000] is presented based on √3 subdivision and harmonic interpolation. Harmonic interpolation is generalized to triangle meshes based on a distance representation of the basis functions. The harmonic surface is approximated by limiting the support of the basis functions and the resulting surface is shown to satisfy necessary conditions for continuity. We provide subdivision rules for vertices of valence 3, 4 and 6 that can be applied directly to obtain a smooth surface. Other valences are handled as described in the literature. The resulting algorithm is easily implemented due to √3 subdivision and the simplicity of the stencils involved.
"High dynamic range preserving compression of light fields and reflectance fields" — N. Menzel, M. Guthe (doi:10.1145/1294685.1294697)

Surface structures at meso- and micro-scale are almost impossible to reproduce convincingly with analytical BRDFs. Therefore, image-based methods like light fields, surface light fields, reflectance fields and bidirectional texture functions have become widely accepted for representing spatially non-uniform surfaces. All of these techniques require a set of input photographs from varying view and/or light directions that usually far exceeds the available graphics memory. The recent development of HDR photography has additionally increased the amount of data generated by current acquisition systems, since every image needs to be stored as an array of floating-point numbers. Furthermore, the statistical compression methods commonly used -- like principal component analysis (PCA) -- are optimal for linearly distributed values and thus cannot handle high dynamic range radiance values appropriately. In this paper, we address both of these problems introduced by the acquisition of high dynamic range light and reflectance fields. Instead of directly compressing the radiance data with a truncated PCA, a non-linear transformation is applied to the input values in advance to ensure an almost uniform distribution. This not only significantly improves the approximation quality after an arbitrary tone-mapping operator is applied to the reconstructed HDR images, but also allows the principal components to be quantized efficiently, and even hardware-supported texture compression to be applied, without much further loss of quality. Thus, in addition to the improved visual quality, the storage requirements are reduced by more than an order of magnitude.
{"title":"High dynamic range preserving compression of light fields and reflectance fields","authors":"N. Menzel, M. Guthe","doi":"10.1145/1294685.1294697","DOIUrl":"https://doi.org/10.1145/1294685.1294697","url":null,"abstract":"Surface structures at meso- and micro-scale are almost impossible to convincingly reproduce with analytical BRDFs. Therefore, image-based methods like light fields, surface light fields, reflectance fields and bidirectional texture functions became widely accepted to represent spatially nonuniform surfaces. For all of these techniques a set of input photographs from varying view and/or light directions is taken that usually by far exceeds the available graphics memory. The recent development of HDR photography additionally increased the amount of data generated by current acquisition systems since every image needs to be stored as an array of floating point numbers. Furthermore, statistical compression methods -- like principal component analysis (PCA) -- that are commonly used for compression are optimal for linearly distributed values and thus cannot handle the high dynamic range radiance values appropriately.\u0000 In this paper, we address both of these problems introduced by the acquisition of high dynamic range light and reflectance fields. Instead of directly compressing the radiance data with a truncated PCA, a non-linear transformation is applied to input values in advance to assure an almost uniform distribution. This does not only significantly improve the approximation quality after an arbitrary tone mapping operator is applied to the reconstructed HDR images, but also allows to efficiently quantize the principal components and even apply hardware-supported texture compression without much further loss of quality. Thus, in addition to the improved visual quality, the storage requirements are reduced by more than an order of magnitude.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121370259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A multiresolution object space point-based rendering approach for mobile devices" — Zhiying He, Xiaohui Liang (doi:10.1145/1294685.1294687)

Limited resources on mobile devices make providing real-time, realistic 3D graphics locally a challenging task. Recent research has focused either on remote rendering, which offers poor interactivity, or on simple local rendering, which offers poor rendering quality. To address this challenge, this paper presents a new multiresolution object-space point-based rendering approach for local rendering on mobile devices. The approach uses hierarchical clustering to create a hierarchy of bounding volumes; in addition, we use curvature sampling to further reduce the number of sample points and give a rapid LOD selection algorithm. View-independent object-space surface splatting is then used as the rendering primitive, which provides good rendering quality. Experimental results show that this approach uses less time and achieves better rendering quality on mobile devices.
{"title":"A multiresolution object space point-based rendering approach for mobile devices","authors":"Zhiying He, Xiaohui Liang","doi":"10.1145/1294685.1294687","DOIUrl":"https://doi.org/10.1145/1294685.1294687","url":null,"abstract":"The limitation of resource on mobile devices makes providing real-time, realistic 3D graphics on local become a challenging task. Recent researches focus on remote rendering which is not good in interaction, and simple rendering on local which is not good in rendering quality. As for this challenge, this paper presents a new multiresolution object space point-based rendering approach for mobile devices local rendering. The approach use hierarchical clustering to create a hierarchy of bounding volumes, in addition, we use curvature sampling to reduce more amounts of sample points and give a rapid LOD selection algorithm. Then use view-independent object space surface splatting as the rendering primitives which can provide good rendering quality. Experiment results show that this approach uses less time and gets better rendering quality for mobile devices.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128755172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}