Collaborative stretcher carrying: a case study
R. Hubbold
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/007-012
Published in: International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
This paper describes a simulation of a collaborative task in a shared virtual environment --- two users carrying a shared object (a stretcher) in a complex chemical plant. The implementation includes a haptic interface for each user, so that forces transmitted through the stretcher from one user to the other can be experienced. Preliminary experiments show that the addition of haptic feedback significantly enhances the sense of sharing and each user's perception of the other user's actions. The implementation is described, and conclusions about the value of haptics and plans for future work are given.
Redirected Walking in Place
Sharif Razzaque, David Swapp, M. Slater, M. Whitton, A. Steed
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/123-130
This paper describes a method for allowing people to move virtually around a CAVE™ without ever having to turn to face the missing back wall. We describe the method and report a pilot study of 28 participants, half of whom moved through the virtual world using a hand-held controller, while the other half used the new technique, called 'Redirected Walking in Place' (RWP). The results show that the current instantiation of the RWP technique does not result in a lower frequency of looking towards the missing wall. However, the results also show that the sense of presence in the virtual environment is significantly and negatively correlated with the amount that the back wall is seen, and there is evidence that RWP does reduce the chance of seeing the blank wall for some participants. The increased sense of presence from never having to face the blank wall, together with the results of this pilot study, shows that RWP has promise and merits further development.
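Redirection techniques of this family generally work by injecting a small extra scene rotation each frame, scaled by the user's own head motion so that the injected rotation stays below perceptual thresholds. The sketch below illustrates only that general idea; the gain value, clamping rule, and function name are invented for illustration and are not the paper's controller:

```python
import math

def redirect_rotation(user_yaw_rate, dt, gain=0.1, max_rate=math.radians(5)):
    """One frame of scene redirection (illustrative sketch, not the
    paper's exact algorithm).

    user_yaw_rate -- user's current head-turn speed (rad/s)
    dt            -- frame time (s)
    gain          -- fraction of the user's own rotation to inject (assumed)
    max_rate      -- comfort cap on injected rotation speed (assumed)
    Returns the scene rotation (rad) to apply this frame.
    """
    # Inject rotation proportional to how fast the user is already
    # turning, so the extra rotation is hard to notice.
    injected = gain * abs(user_yaw_rate) * dt
    # Never rotate the scene faster than the comfort limit.
    return min(injected, max_rate * dt)
```

Accumulated over many frames, these small rotations steer the user's virtual heading away from the CAVE's missing wall.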
Haptic Display for a Virtual Reality Simulator for Flexible Endoscopy
Olaf Körner, R. Männer
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/013-018
A simulation system for flexible endoscopy, based on virtual reality techniques, is described. The physician moves the flexible endoscope inside a pipe, in which forces are applied to it. In addition, the navigation wheels provide force feedback from the bending of the endoscope's tip. The paper focuses on the special-purpose haptic display, which actively generates forces to model the complex interaction of physician, endoscope and patient with high accuracy. Moreover, fast algorithms for real-time force simulation are presented.
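The abstract does not give the force-simulation algorithms themselves; a common baseline for real-time haptic force rendering is a penalty-based spring-damper contact model, sketched below. The gains and function name are illustrative assumptions, not the authors' algorithm:

```python
def contact_force(penetration, penetration_rate, k=500.0, b=2.0):
    """Penalty-based contact force for haptic rendering (generic sketch).

    penetration      -- depth of the tool inside the surface (m)
    penetration_rate -- speed at which it is moving deeper (m/s)
    k, b             -- illustrative stiffness and damping gains
    """
    if penetration <= 0.0:
        return 0.0                      # no contact, no force
    # Spring term pushes the tool back out; the damping term only
    # resists further penetration (it never pulls the tool inward).
    return k * penetration + b * max(penetration_rate, 0.0)
```

In a simulator like this one, such a force law would be evaluated in a high-rate loop (typically around 1 kHz) so the feedback feels stiff and stable.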
Scope-Based Interaction - A Technique for Interaction in an Image-Based Virtual Environment
S. Yoshida, Kunio Yamada, K. Mochizuki, K. Aizawa, T. Saito
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/139-148
Multimedia Ambiance Communication is a means to achieve shared-space communication in an immersive environment constructed of photo-realistic natural images, where users can feel they are part of the environment. An image-based virtual environment is generally represented as an extensive field, in scenes showing mainly a landscape, and most objects are beyond the viewer's reach. Additionally, it usually has a single suitable point for observation because of limitations in the capture and representation methods of 3D-image spaces. Therefore, a special technique has to be developed that enables interaction with the environment. This paper describes the concept of a technique to interact with the scene based on a telescope-like virtual tool. The tool enables the user to stereoscopically view a distant object so that it appears to be within reach, and to manipulate the object directly by putting a hand in the "scope". Hence, the user can handle objects at any distance, seamlessly and from the best viewpoint, without leaving the immersive environment.
An Interactive Toolkit Library for 3D Applications: it3d
Noritaka Osawa, K. Asai, F. Saito
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/149-157
An interactive toolkit library for developing 3D applications, called "it3d", is described that utilizes artificial reality (AR) technologies. It was implemented in the Java language, using the Java 3D class library to enhance its portability. It3d consists of three sub-libraries: an input/output library for distributed devices, a 3D widget library for multimodal interfacing, and an interaction-recognition library. The input/output library for distributed devices has a uniform programming-interface style for various types of devices; the interfaces are defined using OMG IDL. The library utilizes multicast peer-to-peer communication to enable efficient device discovery and exchange of events and data, for which multicast-capable CORBA functions have been developed and used. The 3D widget library for the multimodal interface provides 3D widgets that support efficient and flexible customization based on prototype-based object orientation, or a delegation model. The attributes of a widget, which form a hierarchical structure, are used to customize it dynamically. The interaction-recognition library recognizes basic motions in 3D space, such as pointing, selecting, pinching, grasping, and moving. The library is flexible, and the recognition conditions can be given as parameters. A new recognition engine can be developed by using a new circular event-history buffer to efficiently manage and retrieve past events. Development of immersive AR applications using it3d demonstrated that less time is needed to develop applications with it3d than without it. It3d thus makes it easy to construct AR applications that are portable and adaptable.
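The circular event-history buffer mentioned in the abstract is a standard structure: a fixed-capacity ring that keeps only the most recent events, over which a recognizer can query a recent time window. The sketch below shows the idea in Python rather than it3d's Java; the class and method names are invented for illustration:

```python
from collections import deque

class EventHistory:
    """Circular event-history buffer (generic sketch of the idea
    named in the abstract, not it3d's actual class)."""

    def __init__(self, capacity=256):
        # deque with maxlen drops the oldest entry automatically,
        # giving ring-buffer behavior without manual index wrapping.
        self.events = deque(maxlen=capacity)

    def push(self, timestamp, event):
        self.events.append((timestamp, event))

    def recent(self, now, window):
        # Return events whose timestamp falls within the last
        # `window` seconds -- the query a gesture recognizer needs.
        return [e for (t, e) in self.events if now - t <= window]
```

A recognizer for a motion such as "pinch then move" would push tracker events into the buffer each frame and inspect `recent(...)` for the qualifying sequence.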
Real-time simulation of elastic objects in Virtual Environments using the finite element method and precomputed Green's functions
I. Nikitin, L. Nikitina, Pavel Frolov, G. Goebbels, M. Göbel
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/047-052
Simulation of an object's elastic deformation is an important feature in applications where three-dimensional object behavior is explored. In addition, the benefits of user-object interactions are best realized in interactive environments, which require the rapid computation of deformations. In this paper we present a prototype system for the simulation of elastic objects in Virtual Environments (VE) under real-time conditions. The approach makes use of the finite element method and precomputed Green's functions. The simulation is interactively visualized in fully immersive rear-projection-based Virtual Environments such as the CyberStage, and in semi-immersive ones such as the Responsive Workbench. Using pick-ray interaction techniques, the user can interactively apply forces to the object, causing its deformation. Our interactive visualization module, embedded in the VE system Avango, supports real-time deformation of a high-resolution 3D model (10,000 nodes) at more than 20 stereo images per second.
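The precomputed-Green's-functions idea can be illustrated in a few lines: for linear elasticity, the displacement response to any applied force is a superposition of precomputed unit-force responses (the columns of the inverse stiffness matrix), so the runtime cost is a matrix-vector product instead of a linear solve per frame. The tiny stiffness matrix below is a hypothetical toy, not from the paper:

```python
import numpy as np

def precompute_green_functions(K):
    # Offline step: columns of K^{-1} are the displacement responses
    # to a unit force at each degree of freedom ("Green's functions").
    return np.linalg.inv(K)

def deform(G, forces):
    # Runtime step: superpose precomputed responses (valid because
    # the model is linear), avoiding a solve of K u = f each frame.
    return G @ forces

# Toy 3-DOF stiffness matrix (hypothetical values for illustration).
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
G = precompute_green_functions(K)
f = np.array([0.0, 1.0, 0.0])   # unit force at the middle node
u = deform(G, f)
assert np.allclose(K @ u, f)    # the response satisfies K u = f
```

For a 10,000-node model, a practical system would store only the responses actually needed (e.g. for surface nodes) rather than the full dense inverse, but the runtime superposition step is the same.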
Rail-Track Viewer - An Image-Based Virtual Walkthrough System
Lining Yang, R. Crawfis
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/037-046
Complex renderings of synthetic scenes or virtual environments, once deemed impossible for consumer rendering, are becoming available as tools for young artists. Because of their high-quality image synthesis, these renderings can take minutes to hours to render. As computing power has increased dramatically, the size and complexity of the datasets generated by supercomputers can be overwhelming, and it is almost impossible for visualization techniques to achieve interactive frame rates on them. Our work focuses on using Image-Based Rendering (IBR) techniques to manage and explore large and complex datasets and virtual scenes on a remote display across the World Wide Web. The key idea of this research is to pre-process the datasets and render key viewpoints on pre-selected paths inside the dataset. We present new techniques to reconstruct approximations to any view along the path, which allow the user to roam around inside the datasets at interactive frame rates. We have implemented the pipeline for generating the sampled key viewpoints and reconstructing panoramic-based IBR models. Our implementation includes an efficient two-phase caching and pre-fetching scheme. The system has been tested successfully on several datasets, and satisfactory results have been obtained. An analysis of errors is also presented.
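The abstract does not detail the two-phase caching and pre-fetching scheme; one plausible reading is an LRU cache of rendered key viewpoints combined with pre-fetching of the neighbouring viewpoints along the path, so that movement in either direction rarely blocks on a load. The sketch below uses invented names and is an illustration of that general pattern, not the authors' implementation:

```python
from collections import OrderedDict

class PanoramaCache:
    """LRU cache of rendered key viewpoints along a path, with
    pre-fetch of the current view's neighbours (generic sketch)."""

    def __init__(self, loader, capacity=8):
        self.loader = loader          # callable: viewpoint index -> panorama
        self.capacity = capacity
        self.cache = OrderedDict()    # insertion order tracks recency

    def get(self, idx):
        if idx in self.cache:
            self.cache.move_to_end(idx)          # mark most recently used
        else:
            self.cache[idx] = self.loader(idx)   # phase 1: load on demand
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used
        # Phase 2: pre-fetch neighbours so path traversal stays smooth.
        for n in (idx - 1, idx + 1):
            if n >= 0 and n not in self.cache and len(self.cache) < self.capacity:
                self.cache[n] = self.loader(n)
        return self.cache[idx]
```

In a real viewer, the pre-fetch step would run on a background thread (and over the network for remote display) instead of loading synchronously as shown here.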
Technical System for Collaborative Work
A. Kunz, Christian P. Spagno
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/073-080
Virtual reality makes it possible to realize distributed collaborative teamwork: objects can be represented three-dimensionally in different visualization installations which are connected with each other over a network [7]. Up to now, however, the user has mostly been left out of consideration. For distributed collaborative teamwork the user should be visualized three-dimensionally together with the other virtual objects [14]. This paper describes a special projection installation which allows simultaneous projection and acquisition of images of the users.
AR-Planning Tool - Designing Flexible Manufacturing Systems with Augmented Reality
J. Gausemeier, Jürgen Fründ, C. Matysczok
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/019-025
The technology of augmented reality (AR), as a new user interface, introduces a completely new perspective for the design of technical manufacturing systems. This technique supports face-to-face collaboration in which users can easily cooperate with each other. As with typical construction sets such as LEGO or Fischertechnik, the planning engineers model the future manufacturing system in their real environment. The components are taken from virtual construction sets and are positioned interactively in the manufacturing hall. Planning rules are used to assist the user and to prevent possible errors. This article describes the conception of a virtual construction set and the realization of its prototype. The description of the development of this construction set is supplemented by an illustration of the hardware and software components used.
Evaluation of a Collaborative Volume Rendering Application in a Distributed Virtual Environment
U. Wössner, J. Schulze, S. Walz, U. Lang
Pub Date: 2002-05-30 | DOI: 10.2312/EGVE/EGVE02/113-122
In this paper, we present a collaborative volume rendering application which can be used in distributed virtual environments. The application allows the users to collaboratively view volumetric data and manipulate the transfer functions. Furthermore, 3D markers can be used to support communication. The collaborative setup includes a full-duplex audio channel between the virtual environments. The developed software was evaluated with external users who were asked to solve tasks in two scenarios which resembled real-world situations from the medical field: a presentation and a time-constrained search task. For the evaluation, two 4-sided CAVE-like virtual environments were linked. The collaborative application was analyzed for both technical and social aspects.