Interfaces for Cloning in Immersive Virtual Environments
Jing Chen, D. Bowman, John F. Lucas, C. A. Wingrave
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
Pub Date: 2004-06-08; DOI: 10.2312/EGVE/EGVE04/091-098

Three-dimensional objects in many application domains, such as architecture and construction, can be extremely complex and can consist of a large number of components. However, many of these complex objects also contain a great deal of repetition. Therefore, cloning techniques, which generate multiple spatially distributed copies of an object to form a repeated pattern, can be used to model these objects more efficiently. Such techniques are important and useful in desktop three-dimensional modeling systems, but we are not aware of any cloning techniques designed for immersive virtual environments (VEs). In this paper, we present an initial effort toward the design and development of such interfaces. We define the design space of the cloning task, present five novel VE interfaces for cloning, and articulate the design rationale. We have also performed a usability study intended to elicit subjective responses with regard to affordances, feedback, attention, perceived usefulness, ease of use, and ease of learning in these interfaces. The study resulted in four major conclusions. First, slider widgets are better suited for discrete than for continuous numeric input. Second, the attentional requirements of the interface increase with the number of degrees of freedom associated with its widgets. Third, users prefer constrained widget movement, although more degrees of freedom allow more efficient parameter setting. Finally, appropriate feedback can reduce cognitive load. The lessons we learned will influence our continuing design of cloning techniques, and these techniques will ultimately be applied to VE applications for design, construction, and prototyping.
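The core operation the abstract describes, generating spatially distributed copies that form a repeated pattern, can be sketched as a small function. This is an illustrative grid-pattern sketch, not the paper's interface; the `counts` and `spacing` parameters stand in for the values the paper's widgets would set.

```python
def clone_pattern(base, counts, spacing):
    """Generate clone positions on a regular 3D grid.

    base    -- (x, y, z) position of the original object
    counts  -- (nx, ny, nz) number of copies along each axis
    spacing -- (dx, dy, dz) offset between adjacent copies
    Returns one position tuple per clone (the original included at index 0).
    """
    nx, ny, nz = counts
    dx, dy, dz = spacing
    return [
        (base[0] + i * dx, base[1] + j * dy, base[2] + k * dz)
        for i in range(nx)
        for j in range(ny)
        for k in range(nz)
    ]

# 4 x 1 x 3 copies of an object at the origin, e.g. columns on a facade.
positions = clone_pattern((0.0, 0.0, 0.0), (4, 1, 3), (2.5, 0.0, 3.0))
print(len(positions))  # 12 clone positions
```

The interfaces studied in the paper differ in how the user supplies `counts` and `spacing` (sliders, widgets with varying degrees of freedom), not in this underlying pattern generation.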
MANDALA: A Reconfigurable VR Environment for Studying Spatial Navigation in Humans Using EEG
P. Boulanger, Daniel Torres, W. Bischof
Pub Date: 2004-06-08; DOI: 10.2312/EGVE/EGVE04/061-070

This paper describes a reconfigurable VR environment and a markup language for creating experiments aimed at understanding human spatial navigation. It permits the creation of high-quality virtual environments and the recording of behavioral and brain-activity measures while observers navigate these environments. The system is used in studies in which electroencephalographic activity is recorded while observers navigate virtual environments. The results of the study reported here confirm previous findings that theta oscillations (electroencephalographic activity in the 4-8 Hz band) are linked to the difficulty of spatial navigation. Further, they show that this activity is likely to occur at points where new rooms come into view, or after navigational mistakes have been realized and are being corrected. This indicates that theta oscillations in humans are related to the encoding and retrieval of spatial information.
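The theta-band measure referenced above (power in the 4-8 Hz EEG band) can be illustrated with a minimal sketch. Real EEG pipelines use windowing and FFT libraries; this naive DFT version, with synthetic signals in place of recorded EEG, only shows what "power in the 4-8 Hz band" means.

```python
import math

def band_power(signal, fs, f_lo=4.0, f_hi=8.0):
    """Estimate signal power in a frequency band via a naive DFT.

    fs is the sampling rate in Hz; the default band is the 4-8 Hz
    theta band discussed in the abstract. Illustrative only.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 128                          # hypothetical sampling rate
t = [i / fs for i in range(2 * fs)]  # two seconds of samples
theta = [math.sin(2 * math.pi * 6 * x) for x in t]   # 6 Hz: inside the band
alpha = [math.sin(2 * math.pi * 11 * x) for x in t]  # 11 Hz: outside the band
print(band_power(theta, fs) > band_power(alpha, fs))  # True
```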
The DigiTracker, a Three Degrees of Freedom Pointing Device
F. Martinot, P. Plénacoste, C. Chaillou
Pub Date: 2004-06-08; DOI: 10.2312/EGVE/EGVE04/099-104

For a variety of reasons, only a few computer input devices allow pointing, tracking, and selection tasks to be performed precisely, quickly, and intuitively in 3D workspaces. This article presents the ergonomic and technical principles that guided the design of a desktop input device called the "DigiTracker". The user controls the position of a virtual object by grasping an isotonic end-effector between the thumb and forefinger while the forearm rests on the desk. This equivalent of an absolute three-degrees-of-freedom mouse is especially suitable for closed virtual workspaces. The low technological cost of this solution could make it a worthwhile alternative to complex VR tracking systems. Possible applications are remote positioning tasks, or CAD when used simultaneously with a device dedicated to rotation control.
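An "absolute" 3-DOF device, as opposed to a relative mouse, maps each physical position directly to a virtual one. A minimal sketch of such a mapping, with hypothetical workspace ranges (the paper does not specify these numbers), clamped to suit the closed virtual workspaces the abstract mentions:

```python
def map_absolute(device_pos, device_range, virtual_range):
    """Map an absolute device position into a closed virtual workspace.

    device_pos    -- (x, y, z) reading from the device
    device_range  -- per-axis (min, max) of the physical workspace
    virtual_range -- per-axis (min, max) of the virtual workspace
    Positions outside the device range are clamped, keeping the cursor
    inside the closed virtual workspace.
    """
    out = []
    for p, (d0, d1), (v0, v1) in zip(device_pos, device_range, virtual_range):
        t = (p - d0) / (d1 - d0)      # normalize to [0, 1]
        t = max(0.0, min(1.0, t))     # clamp to the workspace
        out.append(v0 + t * (v1 - v0))
    return tuple(out)

# Hypothetical 10 cm cube of physical travel mapped to a [-1, 1] cube.
print(map_absolute((0.05, 0.0, 0.1), [(0.0, 0.1)] * 3, [(-1.0, 1.0)] * 3))
```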
Live Tuning of Virtual Environments: The VR-Tuner
Stefan Conrad, H. Krüger, Matthias Haringer
Pub Date: 2004-06-08; DOI: 10.2312/EGVE/EGVE04/123-128

This paper describes a solution for modifying virtual environment (VE) applications while immersed in the application scenario inside an immersive projection environment. We propose an infrastructure that enables developers to adjust object properties and change the structure of the scene graph and the data flow between nodes using a tablet PC. The interface consists of a two-dimensional graphical user interface (2D GUI) presented on a spatially aware touch-screen computer, accompanied by a mixer console with motor faders. We discuss the usability of combining different interaction modalities for the task of tuning VE applications.
Authoring of Mixed Reality Applications including Multi-Marker Calibration for Mobile Devices
J. Zauner, Michael Haller
Pub Date: 2004-06-08; DOI: 10.2312/EGVE/EGVE04/087-090

Creative and innovative people have good ideas for new kinds of Mixed Reality (MR) applications; applications designed by artists, for example, enrich the exhibitions of modern museums. Developing such an MR application is a complex task that is nowadays carried out by software engineers. We have developed an authoring tool that integrates a user-friendly and intuitive calibration tool for developing MR applications.
Optical Tracking using Line Pencil Fiducials
A. V. Rhijn, J. D. Mulder
Pub Date: 2004-06-08; DOI: 10.2312/EGVE/EGVE04/035-044

In this paper, a new pattern-based optical tracking method is presented for the recognition and pose estimation of input devices for virtual and augmented reality environments. The method is based on pencils of line fiducials, which reduces occlusion problems and allows single-camera pattern recognition and orientation estimation. Pattern recognition is accomplished using a projective invariant property of line pencils: the cross ratio. Orientation is derived from single-camera line-plane correspondences, and position estimation is achieved using multiple cameras. The method is evaluated against a related point-based tracking approach. Results show that our method has lower latency and comparable accuracy, and is less sensitive to occlusion.
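The projective invariant named above, the cross ratio, is what makes single-camera pattern recognition possible: it is preserved under any projective transformation, so a pattern's cross ratio measured in the image matches the one measured on the device. A sketch for four collinear points (a pencil of lines has the same cross ratio as its intersections with a transversal line); the transformation used here is an arbitrary stand-in for a camera projection, not the paper's calibration:

```python
def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points, given as scalar
    parameters along the line: (a-c)(b-d) / ((a-d)(b-c))."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective(x):
    # An arbitrary 1D projective (Moebius) transformation, standing in
    # for the perspective projection of points on a line into an image.
    return (2 * x + 1) / (x + 3)

pts = (0.0, 1.0, 2.0, 4.0)
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x) for x in pts))
print(before, abs(before - after) < 1e-9)  # same value on both sides
```

Because the value survives projection, a tracker can identify which fiducial pattern it is seeing from a single image, before any pose is computed.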
A Tele-immersive System Based On Binocular View Interpolation
P. Boulanger, M. Benitez, Winston Wong
Pub Date: 2004-06-08; DOI: 10.2312/EGVE/EGVE04/137-146

The main idea behind tele-immersive environments is to create an immersive virtual environment that connects people across networks and enables them to interact not only with each other, but also with various other forms of shared digital data (video, 3D models, images, text, etc.). Tele-immersive environments may eventually replace current video and telephone conferencing, and enable a better and more intuitive way for people and computer systems to communicate. To accomplish this, participants in a meeting have to be represented digitally with a high degree of accuracy in order to maintain a sense of immersion. Tele-immersive environments should have the same "feel" as a real meeting, and interactions among people should be natural. To create such a system, we need to solve the key problem of generating, in real time, new views corresponding to new viewpoints from a fixed network of cameras. We also need to do this for two virtual cameras separated by the inter-ocular distance of each participant. In this paper, we describe a new binocular view interpolation method based on a re-projection technique using calibrated cameras. We discuss the various aspects of this new algorithm and the hardware systems necessary to perform these operations in real time, and present early experimental results illustrating the advantages of the algorithm.
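The binocular part of the problem can be sketched with a pinhole model: a reconstructed 3D point is re-projected into two virtual cameras whose centres are separated by the inter-ocular distance. All numbers below (focal length, baseline, geometry) are hypothetical, and the simple identity-rotation cameras stand in for the paper's calibrated re-projection:

```python
def project(P, X):
    """Pinhole projection x ~ P [X; 1] for a 3x4 camera matrix P,
    returning normalized image coordinates (u, v)."""
    Xh = list(X) + [1.0]
    x = [sum(P[r][c] * Xh[c] for c in range(4)) for r in range(3)]
    return (x[0] / x[2], x[1] / x[2])

def camera(f, cx):
    """Camera with focal length f (pixels), identity rotation, and
    centre at (cx, 0, 0): P = K [I | -C]."""
    return [[f,   0.0, 0.0, -f * cx],
            [0.0, f,   0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0]]

iod = 0.065  # typical inter-ocular distance in metres (assumed value)
left = camera(800.0, -iod / 2)
right = camera(800.0, +iod / 2)
X = (0.0, 0.0, 2.0)  # a point two metres in front of the rig
print(project(left, X), project(right, X))  # horizontally disparate images
```

The horizontal disparity between the two projections is what gives each participant a stereoscopic view of the remote scene.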
The Interaction Table - a New Input Device Designed for Interaction in Immersive Large Display Environments
M. Hachet, P. Guitton
Pub Date: 2002-05-30; DOI: 10.2312/EGVE/EGVE02/189-196

Large display systems, such as Reality Centers or Powerwalls, allow several users located in the same physical space to be immersed in a virtual environment. The characteristics of such systems induce new problems and constraints as far as interaction is concerned. Given the lack of input devices well adapted to large displays, we are developing a new interactor: the Interaction Table. This device, composed of a movable tray fixed on a pillar, offers 6 DOFs and uses both isotonic and isometric information. The table top offers a 2D plane on which the position of a pen can be recovered. Many 2D and 3D interaction techniques can be used to accomplish the different interaction tasks (navigation, manipulation, selection, system control) over different space ranges. The design of the Interaction Table makes it accurate and easy to use without effort, and its self-supporting design makes it a non-constraining tool that can be shared by all co-located users. We illustrate the utility of the Interaction Table through a real application in 3D geomarketing.
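One common way a device can use "both isotonic and isometric information", as the abstract says the Interaction Table does, is a hybrid mapping: displacement within the tray's free travel acts as position control (isotonic), while pushing against the travel limit produces rate control (isometric-like). This 1-DOF sketch is a generic illustration of that idea, not necessarily the paper's actual mapping:

```python
def hybrid_control(displacement, limit, rate_gain):
    """Hybrid isotonic/isometric mapping for one axis.

    displacement -- current tray offset from its rest position
    limit        -- free travel before the hard stop (same units)
    rate_gain    -- velocity per unit of overshoot at the stop
    Returns ("position", value) inside the travel, or ("rate", velocity)
    when pressing past the limit.
    """
    if abs(displacement) <= limit:
        return ("position", displacement)
    overshoot = displacement - limit if displacement > 0 else displacement + limit
    return ("rate", rate_gain * overshoot)

print(hybrid_control(0.03, 0.05, 4.0))  # within travel: positional
print(hybrid_control(0.08, 0.05, 4.0))  # at the stop: rate control
```

Position control suits short-range manipulation; rate control suits the long-range navigation tasks a large-display environment demands, which is why hybrid devices cover "different space ranges".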
Avatar Markup Language
S. Kshirsagar, N. Magnenat-Thalmann, Anthony Guye-Vuillème, D. Thalmann, Kaveh Richard Kamyab Tehrani, E. Mamdani
Pub Date: 2002-05-30; DOI: 10.2312/EGVE/EGVE02/169-177

Synchronization of speech, facial expressions, and body gestures is one of the most critical problems in realistic avatar animation in virtual environments. In this paper, we address this problem by proposing a new high-level animation language to describe avatar animation. The Avatar Markup Language (AML), based on XML, encapsulates Text-to-Speech, Facial Animation, and Body Animation in a unified manner with appropriate synchronization. We use low-level animation parameters, defined by the MPEG-4 standard, to demonstrate the use of AML; however, AML itself is independent of any particular low-level parameters. AML can be used effectively by intelligent software agents to control their 3D graphical representations in virtual environments. With the help of the associated tools, AML also makes it easy to create and share 3D avatar animations. We also discuss how the language has been developed and used within the SoNG project framework, and the tools developed to use AML in a real-time animation system incorporating intelligent agents and 3D avatars.
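The idea of an XML language that unifies speech, face, and body tracks under shared timing can be illustrated with Python's standard XML parser. The fragment below is a hypothetical AML-like document invented for illustration; it is not the actual AML schema from the paper, only a demonstration of merging three tracks into one synchronized timeline:

```python
import xml.etree.ElementTree as ET

# Hypothetical AML-style fragment: three animation tracks, each element
# carrying a start time used for synchronization.
doc = """
<aml>
  <speech start="0.0" text="Hello there"/>
  <face start="0.2" expression="smile"/>
  <body start="0.2" gesture="wave"/>
</aml>
"""

root = ET.fromstring(doc)
# Merge all tracks into a single timeline ordered by start time, the
# kind of schedule a player would dispatch to TTS, face, and body engines.
timeline = sorted((float(el.get("start")), el.tag, dict(el.attrib)) for el in root)
for start, tag, attrs in timeline:
    print(f"{start:.1f}s {tag}: {attrs}")
```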
Virtual Prints: Leaving trails in Virtual Environments
Dimitris Grammenos, M. Filou, P. Papadakos, C. Stephanidis
Pub Date: 2002-05-30; DOI: 10.2312/EGVE/EGVE02/131-138

In this paper, the concept of Virtual Prints (ViPs) is introduced, and alternative ways in which they can be used are suggested. The design and required functionality of a software mechanism for creating and interacting with ViPs in virtual environments are presented, along with techniques and methods for overcoming related issues. Finally, the findings of an exploratory study of the concept and a pilot implementation are discussed.