Impact of large displays on virtual reality task performance
F. Tyndiuk, V. Lespinet-Najib, G. Thomas, C. Schlick
doi:10.1145/1029949.1029960
A better understanding of how users perform virtual reality tasks may help to build better virtual reality interfaces. In this study, we concentrate on the impact of large displays in virtual reality, depending on the task and on users' characteristics. The two virtual reality tasks studied are object manipulation and navigation in an environment. The user characteristics studied are visual attention abilities. Forty subjects took part in an experiment composed of cognitive tests used to evaluate visual attentional abilities and a set of virtual reality tasks. Our study yields two main conclusions: (i) large displays improve performance for some kinds of virtual reality tasks; (ii) users with a low level of attentional abilities benefit more from large displays. We conclude that large displays can be considered as cognitive aids, depending on the task and users' characteristics.
{"title":"Impact of large displays on virtual reality task performance","authors":"F. Tyndiuk, V. Lespinet-Najib, G. Thomas, C. Schlick","doi":"10.1145/1029949.1029960","DOIUrl":"https://doi.org/10.1145/1029949.1029960","url":null,"abstract":"A better understanding of how users perform virtual relaity tasks may help to build better virtual reality interfaces. In this study, we concentrate on the impact of large displays in virtual reality depending on the tasks and users' characteristics. The two virtual reality tasks studied are the objects manipulation and the navigation in an environment. The users' characteristics studied are the visual attention abilities. Forty subjects participated in the experimentation composed of cognitive tests used to evaluate visual attentional abilities and a set of virtual reality tasks. Our study exhibits two main conclusions. (i) Large displays positively impact on performances for some kinds of virtual reality tasks. (ii) Users with low level of attentional abilities take more advantage of large displays. We conclude that large displays can be considered as cognitive aids depending on the tasks and users' characteristics.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125956588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DNA in Virtuo visualization and exploration of 3D genomic structures
J. Hérisson, Pierre-Emmanuel Gros, N. Férey, Olivier Magneau, R. Gherbi
doi:10.1145/1029949.1029955
In this paper, we address the potential offered by Virtual Reality and scientific simulation for 3D modeling and immersive visualization of huge genomic sequences. Advanced work on 3D data modeling and structuring is proposed. In bioinformatics, DNA sequences are usually represented in a linear format. However, they also have a three-dimensional structure that is potentially relevant for genomic analysis. Representing this 3D structure allows biologists to observe and analyze genomes interactively at different levels, from gene to chromosome. We developed a powerful software platform, ADN-Viewer, that provides a new point of view for sequence analysis. Nevertheless, a classical eukaryotic chromosome of 40 million base pairs requires about 6 Gbytes of 3D data. In order to manage these huge masses of data in real time, we designed various scene management algorithms and immersive human-computer interaction techniques for user-friendly data exploration.
{"title":"DNA in Virtuo visualization and exploration of 3D genomic structures","authors":"J. Hérisson, Pierre-Emmanuel Gros, N. Férey, Olivier Magneau, R. Gherbi","doi":"10.1145/1029949.1029955","DOIUrl":"https://doi.org/10.1145/1029949.1029955","url":null,"abstract":"In this paper, we address the potential offered by Virtual Reality and scientific simulation for 3D modeling and immersive visualization of huge genomic sequences. Advanced work on 3D data modeling and structuring is proposed. In Bioinformatics, DNA sequences are often represented within linear format. However, they also have a three-dimensional structure potentially suitable for genomic analysis. The representation of such 3D structure allows biologists to observe and analyze genomes in an interactive way at different levels: from gene to chromosome. We developed a powerful software platform that provides a new point of view for sequences analysis: <i>ADN-Viewer</i>. Nevertheless, a classical eukaryotic chromosome of 40 million base pairs requires about 6 Gbytes of 3D data. In order to manage these huge masses of data in real-time, we designed various scene management algorithms and immersive human-computer interaction for <i>user-friendly</i> data exploration.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124872096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rendering trimmed implicit surfaces and curves
B. Schmitt, G. Pasko, A. Pasko, T. Kunii
doi:10.1145/1029949.1029951
Models of implicit surfaces and curves trimmed by a solid are discussed in the context of dimensionally heterogeneous object modeling. Both a carrier surface and a trimming solid are modeled using the function representation. Algorithms for polygonization of trimmed surfaces and curves, as well as ray tracing of trimmed surfaces, are described. Illustrative and CAD-related examples are given.
{"title":"Rendering trimmed implicit surfaces and curves","authors":"B. Schmitt, G. Pasko, A. Pasko, T. Kunii","doi":"10.1145/1029949.1029951","DOIUrl":"https://doi.org/10.1145/1029949.1029951","url":null,"abstract":"Models of implicit surfaces and curves trimmed by a solid are discussed in the context of dimensionally heterogeneous object modeling. Both a carrier surface and a trimming solid are modeled using the function representation. Algorithms for polygonization of trimmed surfaces and curves, as well as ray-tracing of trimmed surfaces are described. Illustrative and CAD related examples are given.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122480701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modelling and rendering techniques for African hairstyles
D. Patrick, S. Bangay, A. Lobb
doi:10.1145/1029949.1029971
We develop or enhance hair modelling and rendering techniques to produce three different forms of hair commonly found in African hairstyles: natural curly hair, straightened hair, and braids or twists of hair. We use an implicit model, implemented as a series of textured layers, to represent curly hair. Straightened hair is represented explicitly, and modelled by defining and replicating a few control hairs. Braids and twists are implemented as textured generalized cylinders. A synthesis of existing hair illumination models is used as a basis for an African hair illumination model, and parameter values to match African hair characteristics are discussed. A number of complete African hairstyles are shown, demonstrating that the techniques can be used to model and render African hair successfully.
{"title":"Modelling and rendering techniques for African hairstyles","authors":"D. Patrick, S. Bangay, A. Lobb","doi":"10.1145/1029949.1029971","DOIUrl":"https://doi.org/10.1145/1029949.1029971","url":null,"abstract":"We develop or enhance hair modelling and rendering techniques to produce three different forms of hair commonly found in African hairstyles. The forms of hair are natural curly hair, straightened hair, and braids or twists of hair.\u0000 We use an implicit model, implemented as a series of textured layers to represent curly hair. Straightened hair is represented explicitly, and modelled by defining and replicating a few control hairs. Braids and twists are implemented as textured generalized cylinders.\u0000 A synthesis of existing hair illumination models is used as a basis for an African hair illumination model. Parameter values to match African hair characteristics are discussed.\u0000 A number of complete African hairstyles are shown, demonstrating that the techniques can be used to model and render African hair successfully.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115543943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kaleidoscope configurations for reflectance measurement
S. Bangay, J. Radloff
doi:10.1145/1029949.1029979
Simulations of different configurations of the symmetrical tapered kaleidoscope are performed to assess their merits for the measurement of BRDFs and BTFs. The relationship between the optimal kaleidoscope layout and factors such as hardware restrictions and the resolution of the required reflectance function is derived. The effect of changing these independent variables on the measurement of the reflectance function is examined through simulation. These experiments highlight issues affecting the measurement of BTFs using kaleidoscopes, and suggest configurations that allow sampling at regular parameter intervals. A number of other kaleidoscope architectures are explored, which offer the benefits of potentially doubling the range of directions that can be sampled and allowing adaptive control of sample intervals.
{"title":"Kaleidoscope configurations for reflectance measurement","authors":"S. Bangay, J. Radloff","doi":"10.1145/1029949.1029979","DOIUrl":"https://doi.org/10.1145/1029949.1029979","url":null,"abstract":"Simulations of different configurations of the symmetrical tapered kaleidoscope are performed to assess their merits for measurement of BRDFs and BTFs. The relationship between optimal kaleidoscope layout, and factors such as hardware restrictions and the resolution of the required reflectance function, is derived. The effect on the measurement of the reflectance function of changing these independent variables is examined through the simulation. These experiments highlight issues affecting the measurement of BTFs using kaleidoscopes, and suggest configurations that allow sampling at regular parameter intervals. A number of other kaleidoscope architectures are explored, which offer the benefits of potentially doubling the range of directions that can be sampled, and allowing adaptive control of sample intervals.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128651060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing 3D scenes using non-linear projections and data mining of previous camera movements
Karan Singh, Ravin Balakrishnan
doi:10.1145/1029949.1029956
We describe techniques for exploring 3D scenes by combining non-linear projections with the interactive data mining of camera navigations from previous explorations. Our approach is motivated by two key observations: first, that there is a wealth of information in prior explorations of a scene that can assist in future presentations of the same scene; second, that current linear perspective camera models produce images that are too limited to adequately capture the complexity of many 3D scenes. The contributions of this paper are two-fold. First, we show how spatial and temporal subdivision schemes can be used to store camera navigation information that is data mined and clustered to be interactively applicable to a number of existing techniques. Second, we show how the movement of a traditional linear perspective camera is closely tied to non-linear projections that combine space and time. As a result, we present a coherent system where the navigation of a conventional camera is data mined to provide both the understandability of linear perspective and the flexibility of non-linear projection of a 3D scene in real time. Our system's generality is illustrated by three visualization techniques built with a single data mining and projection infrastructure.
{"title":"Visualizing 3D scenes using non-linear projections and data mining of previous camera movements","authors":"Karan Singh, Ravin Balakrishnan","doi":"10.1145/1029949.1029956","DOIUrl":"https://doi.org/10.1145/1029949.1029956","url":null,"abstract":"We describe techniques for exploring 3D scenes by combining non-linear projections with the interactive data mining of camera navigations from previous explorations. Our approach is motivated by two key observations: First, that there is a wealth of information in prior explorations of a scene that can assist in future presentations of the same scene. Second, current linear perspective camera models produce images that are too limited to adequately capture the complexity of many 3D scenes. The contributions of this paper are two-fold. First, we show how spatial and temporal subdivision schemes can be used to store camera navigation information that is data mined and clustered to be interactively applicable to a number of existing techniques. Second, we show how the movement of a traditional linear perspective camera is closely tied to non-linear projections that combine space and time. As a result, we present a coherent system where the navigation of a conventional camera is data mined to provide both the understandability of linear perspective and the flexibility of non-linear projection of a 3D scene in real-time. Our system's generality is illustrated by three visualization techniques built with a single data mining and projection infrastructure.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121969239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QSplat compression
Rachid Namane, Fatima Oulebsir-Boumghar, K. Bouatouch
doi:10.1145/1029949.1029952
The great advances in the field of 3D scanning technologies have enabled the creation of meshes with hundreds of millions of polygons. Rendering data sets of that size is time consuming even with commodity graphics hardware. The QSplat technique, introduced by S. Rusinkiewicz and M. Levoy of Stanford University, is used for the interactive point-based visualization of large 3D scenes. Nevertheless, it has some drawbacks, notably a per-node storage requirement that is still high. The objective of the work we present in this paper is to improve the per-node storage requirements of QSplat models and to minimize the transmission cost when streaming QSplat models across low-bandwidth or bottlenecked networks. To do that, we focus on coding strategies which provide reasonable data reduction at low decoding complexity. In this context, Huffman and relative delta encoding fit well with our purposes. The performance of the compression process is studied, and the rendering algorithm is extended so that it can work on compressed data without losing the original system's interactivity.
{"title":"QSplat compression","authors":"Rachid Namane, Fatima Oulebsir-Boumghar, K. Bouatouch","doi":"10.1145/1029949.1029952","DOIUrl":"https://doi.org/10.1145/1029949.1029952","url":null,"abstract":"The great advances in the field of 3D scanning technologies have enabled the creation of meshes with hundred millions of polygons. Rendering data sets of that size is time consuming even with commodity graphics hardware. The QSplat technique that has been introduced by S. Rusinkiewics and M. Levoy of Stanford University is used for the inter-active point based visualization of large 3D scenes. Nevertheless, it has some drawbacks like the storage requirement which is still higher. The objective of our work we present in this paper is to improve the per-node storage requirements of QSplat models and to minimize the transmission cost in streaming QSplat models across low-bandwidth networks or bottlenecked networks. To do that, we focus on coding strategies which provide reasonable data reduction at low decoding complexity. In this context, Huffman and relative delta encoding fit well with our purposes. The performances of the compression process are studied and the rendering algorithm is extended in order to be able to work on compressed data without loosing the original system interactivity.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125476470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High fidelity reconstruction of the ancient Egyptian temple of Kalabsha
V. Sundstedt, A. Chalmers, Philippe Martinez
doi:10.1145/1029949.1029970
The ancient Egyptian temple of Kalabsha dates back to 30 BC. In 1963 the temple was dismantled and moved to a new site in order to save it from the rising waters of Lake Nasser. Computer graphics, in collaboration with Egyptologists, makes it possible to recreate the temple on a computer, place it back at its original location and orientation, and illuminate it as it may have appeared some 2000 years ago. Accuracy is of the highest importance in such archaeological reconstructions when investigating how a site might have appeared in the past; failure to use the highest fidelity brings a very real danger of misrepresenting the past. This paper describes the practical methodology that should be undertaken in order to create a high fidelity reconstruction and realistic lighting simulation of an ancient Egyptian temple.
{"title":"High fidelity reconstruction of the ancient Egyptian temple of Kalabsha","authors":"V. Sundstedt, A. Chalmers, Philippe Martinez","doi":"10.1145/1029949.1029970","DOIUrl":"https://doi.org/10.1145/1029949.1029970","url":null,"abstract":"The ancient Egyptian temple of Kalabsha dates back to 30 BC. In 1963 the temple was dismantled and moved to a new site in order to save it from the rising waters of the Lake Nasser. Computer graphics in collaboration with Egyptologists makes it possible to recreate the temple on a computer, place it back to its original location and orientation, and illuminate it, as it may have appeared some 2000 years ago. Accuracy is of the highest importance in such archaeological reconstructions when investigating how a site might have appeared in the past. Failure to use the highest fidelity means there is a very real danger of misrepresenting the past.\u0000 This paper describes the practical methodology that should be undertaken in order to create a high fidelity reconstruction and realistic lighting simulation of an ancient Egyptian temple.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"80 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134530138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing a VR interaction authoring tool using constructivist practices
C. Winterbottom, E. Blake
doi:10.1145/1029949.1029961
This paper describes the process of designing an authoring tool for virtual environments, using constructivist principles. The focus of the tool is on helping novice designers without coding experience to conceptualise and visualise the interactions of the virtual environment. According to constructivism, knowledge is constructed by people through interactions with their social and physical environments. Major aspects of this theory are explored, such as multiple representations, reflexivity, exploration, scaffolding and user control. Its practical application to the design of the tool is then described.
{"title":"Designing a VR interaction authoring tool using constructivist practices","authors":"C. Winterbottom, E. Blake","doi":"10.1145/1029949.1029961","DOIUrl":"https://doi.org/10.1145/1029949.1029961","url":null,"abstract":"This paper describes the process of designing an authoring tool for virtual environments, using constructivist principles. The focus of the tool is on helping novice designers without coding experience to conceptualise and visualise the interactions of the virtual environment. According to constructivism, knowledge is constructed by people through interactions with their social and physical environments. Major aspects of this theory are explored, such as multiple representations, reflexivity, exploration, scaffolding and user control. Its practical application to the design of the tool is then described.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131808334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive ray tracing of free-form surfaces
Carsten Benthin, I. Wald, P. Slusallek
doi:10.1145/1029949.1029968
Even though the speed of software ray tracing has recently been increased to interactive performance even on standard PCs, these systems have usually supported only triangles as geometric primitives. Directly handling free-form surfaces such as spline or subdivision surfaces, instead of first tessellating them, offers many advantages, such as higher-precision results, reduced memory requirements, and faster preprocessing due to fewer primitives. However, existing algorithms for ray tracing free-form surfaces are much too slow for interactive use. In this paper we present a simple and generic approach for ray tracing free-form surfaces, together with specific implementations for cubic Bézier and Loop subdivision surfaces. We show that our approach increases performance by more than an order of magnitude, requires only constant memory, and is largely independent of the total number of free-form primitives in a scene. Examples demonstrate that even scenes with over one hundred thousand free-form surfaces can be rendered interactively on a single processor at video resolution.
{"title":"Interactive ray tracing of free-form surfaces","authors":"Carsten Benthin, I. Wald, P. Slusallek","doi":"10.1145/1029949.1029968","DOIUrl":"https://doi.org/10.1145/1029949.1029968","url":null,"abstract":"Even though the speed of software ray tracing has recently been increased to interactive performance even on standard PCs, these systems usually only supported triangles as geometric primitives. Directly handling free-form surfaces such as spline or subdivision surfaces instead of first tesselating them offers many advantages such as higher precision results, reduced memory requirements, and faster preprocessing due to less primitives. However, existing algorithms for ray tracing free-form surfaces are much too slow for interactive use.\u0000 In this paper we present a simple and generic approach for ray tracing free-form surfaces together with specific implementations for cubic Bézier and Loop subdivision surfaces. We show that our approach allows to increase the performance by more than an order of magnitude, requires only constant memory, and is largely independent on the total number of free-form primitives in a scene. Examples demonstrate that even scene with over one hundred thousand free-form surfaces can be rendered interactively on a single processor at video resolution.","PeriodicalId":325699,"journal":{"name":"International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127883323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}