Deskotheque: Improved Spatial Awareness in Multi-Display Environments
Christian Pirchheim, Manuela Waldner, D. Schmalstieg
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811010

In this paper we present Deskotheque, a multi-display environment that combines personal and tiled projected displays into a continuous teamspace. Its main distinguishing feature is a fine-grained spatial (i.e., both geometric and topological) model of the display layout. Using this model, Deskotheque allows seamless mouse pointer navigation and application window sharing across the multi-display environment. Geometric compensation of casually aligned multi-projector displays supports a wide range of display configurations. Mouse pointer redirection and window migration are tightly integrated into the windowing system, while geometric compensation of the projected imagery is performed by a 3D compositing window manager. Thus, Deskotheque shares unmodified desktop application windows across display and workstation boundaries without compromising hardware-accelerated rendering of 2D or 3D content on geometrically compensated projected tiled displays.
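The seamless pointer navigation described above can be illustrated with a minimal sketch of edge-linked displays; the class names, the edge-link scheme, and the height-scaled remapping are assumptions for demonstration, not Deskotheque's actual implementation.

```python
# Hypothetical sketch of cross-display pointer redirection using a
# topological model: displays are linked at their edges, and a pointer
# that leaves one display re-enters the linked neighbour.

class Display:
    def __init__(self, name, width, height):
        self.name, self.width, self.height = name, width, height
        self.links = {}  # edge name ("left"/"right") -> neighbouring Display

def redirect(display, x, y):
    """Return (display, x, y) after applying any edge transition."""
    if x < 0 and "left" in display.links:
        nbr = display.links["left"]
        # re-enter at the neighbour's right edge; scale the vertical
        # coordinate so the crossing point lines up topologically
        return nbr, nbr.width + x, y * nbr.height / display.height
    if x >= display.width and "right" in display.links:
        nbr = display.links["right"]
        return nbr, x - display.width, y * nbr.height / display.height
    return display, x, y

laptop = Display("laptop", 1440, 900)
wall = Display("wall", 3840, 1080)
laptop.links["right"] = wall
wall.links["left"] = laptop

print(redirect(laptop, 1500, 450))  # pointer crosses onto the wall display
```

A real system would additionally pass the remapped coordinates through the geometric (homography-based) model before injecting them into the windowing system.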
A Concept for Applying VR and AR Technologies to Support Efficient 3D Non-contact Model Digitalization
W. Schoor, S. Masik, Johannes Tümler, S. Adler, M. Hofmann, E. Trostmann
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811043

This paper describes a new interactive digitization concept for large real-world objects using the virtual reality (VR) environment Elbe Dom. The method combines augmented reality (AR) technologies with a high-quality display of textured three-dimensional (3D) models. One focus is the display of valuable information about the ongoing measuring process. The proposed method is a non-contact technique, applicable in particular to free-form objects of up to 5 m × 5 m × 3 m, and achieves an accuracy of better than 1 mm within the whole measuring volume.
Comparing Aimed Movements in the Real World and in Virtual Reality
Lei Liu, R. V. Liere, Catharina Nieuwenhuizen, J. Martens
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811026

The study of aimed movements has a long history, starting at least as far back as 1899, when Woodworth proposed a two-component model in which aimed movements are broken into an initial ballistic phase and a subsequent control phase. In this paper, we use Woodworth's model to experimentally compare aimed movements in the real world with those in a virtual environment. Trajectories of real-world movements were collected and compared to trajectories of movements recorded in a virtual environment. We show that significant temporal differences arise in both the ballistic and control phases, but that the difference is much larger in the control phase; users' improvement is relatively greater in the virtual world than in the real world. Users progress more in the ballistic phase in the real world, but more in the control phase in the virtual world. These results allow us to better understand pointing tasks in virtual environments.
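Woodworth's two-component decomposition can be sketched on a recorded speed profile; the phase boundary used here (first local speed minimum after the peak) is a common heuristic and an assumption for illustration, not necessarily the segmentation used by the authors.

```python
# Illustrative split of an aimed-movement speed profile into the
# ballistic phase (launch toward the target) and the control phase
# (corrective sub-movements near the target).

def split_phases(speed):
    """Return the sample index separating ballistic from control phase."""
    peak = max(range(len(speed)), key=speed.__getitem__)
    for i in range(peak + 1, len(speed) - 1):
        # first local minimum after the speed peak marks the hand-off
        # from the ballistic impulse to closed-loop corrections
        if speed[i] <= speed[i - 1] and speed[i] <= speed[i + 1]:
            return i
    return len(speed) - 1

# a toy profile: one big launch, then a small corrective bump
profile = [0.0, 0.4, 0.9, 1.2, 0.8, 0.3, 0.5, 0.4, 0.1, 0.0]
b = split_phases(profile)
print("ballistic samples:", b, "control samples:", len(profile) - b)
```

Comparing the durations of the two segments across real and virtual trajectories is then a matter of summing sample times on either side of the boundary.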
Does a Gradual Transition to the Virtual World increase Presence?
Frank Steinicke, G. Bruder, K. Hinrichs, A. Steed, A. Gerlach
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811024

In order to increase a user's sense of presence in an artificial environment, some researchers propose a gradual transition from reality to the virtual world instead of immersing users in the virtual world directly. One approach is to start the VR experience in a virtual replica of the physical space to accustom users to the characteristics of VR (e.g., latency, reduced field of view, or tracking errors) in a known environment. Although this procedure is already applied in VR demonstrations, until now it has not been verified whether the use of such a transitional environment, as a bridge between the real and the virtual environment, increases someone's sense of presence. We observed subjective, physiological, and behavioral reactions of subjects during a fully immersive flight phobia experiment under two conditions: the virtual flight environment was displayed immediately, or subjects visited a transitional environment before entering the virtual flight environment. We quantified to what extent a gradual transition to the VE via a transitional environment increases the level of presence. We found that subjective responses show significantly higher scores for the user's sense of presence, and that subjects' behavioral reactions change, when a transitional environment is shown first. Considering physiological reactions, no significant difference could be found.
High Resolution Video Playback in Immersive Virtual Environments
Han Suk Kim, J. Schulze
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811038

High-resolution 2D video content, in High Definition or above, has become widespread, and video playback of such media in immersive virtual environments (VEs) is a valuable element that adds realism to VE applications. This kind of video playback, however, has to overcome several problems. First, the data volume of video clips can reach hundreds of gigabytes or more depending on clip length, and the data has to be streamed into virtual reality (VR) systems in real time. Second, the interactivity of the playback screen in 3D virtual environments requires efficient rendering of each video frame. Interactivity means that the plane of the video playback screen needs to rotate, translate, and zoom in and out in 3D space as the viewer roams around the VE; the video is therefore not necessarily parallel to the display screen and must be rendered as a general quadrangle. In this work, we propose an efficient algorithm that utilizes mipmapped data, i.e., multiple levels of resolution, to interactively play back high-resolution video content in VEs. In addition, we discuss several optimizations to sustain a constant frame rate, such as an optimized memory management mechanism, dynamic resolution adjustment, and predictive prefetching of data. Finally, we evaluate two video playback applications running on a virtual reality CAVE system: 1) high-definition video at 3840 × 2160 pixels and 2) 32 independent video clips at 256 × 192 pixels.
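The core idea of mipmapped playback, i.e., streaming only the resolution level the current view actually needs, can be sketched as a level-selection rule; the function name and the rule of halving the width per level are standard mipmap conventions used here as illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical mip-level selection for interactive video playback:
# pick the coarsest pre-scaled level that still covers the video's
# current on-screen footprint, so less data must be streamed.
import math

def choose_mip_level(source_width, screen_width):
    """Level 0 is full resolution; each level halves the width."""
    if screen_width >= source_width:
        return 0
    # number of halvings possible before the level's width would
    # drop below the on-screen width
    return int(math.floor(math.log2(source_width / screen_width)))

# a 3840-pixel-wide frame viewed on a quadrangle about 900 pixels across
level = choose_mip_level(3840, 900)
print("use mip level", level, "->", 3840 >> level, "pixels wide")
```

As the viewer walks toward the screen the footprint grows, the selected level decreases, and a prefetcher can speculatively load the next finer level.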
Haptic Assembly and Disassembly Task Assistance using Interactive Path Planning
Nicolas Ladevèze, J. Fourquet, B. Puel, M. Taïx
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4810993

This paper describes a global interactive scheme combining fast motion planning with a real-time guiding force for 3D CAD part assembly and disassembly tasks. For real-time operation, the motion planner is divided into separate steps. First, a preliminary workspace discretization is performed, without time limitations, at the beginning of the simulation. Then, using this precomputed data, a second stage searches for a collision-free path in real time. Once a path is found, an artificial haptic force is applied that constrains the user to the path. The user can influence the planner by departing from the path, which automatically triggers a new path search. The performance of this haptic assistance is measured in a test simulation based on an ALSTOM power component assembly.
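A minimal sketch of the guiding-force idea is a virtual spring pulling the haptic device back toward the closest point of the planned path; the spring gain, the 2D polyline path, and the function names are illustrative assumptions, not the authors' force model.

```python
# Hypothetical guiding force: find the closest point on a polyline
# path and apply a spring force toward it, constraining the user
# to the collision-free path computed by the planner.

def closest_point_on_segment(p, a, b):
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * ax + (p[1] - a[1]) * ay) / (ax * ax + ay * ay)
    t = max(0.0, min(1.0, t))  # clamp to the segment
    return (a[0] + t * ax, a[1] + t * ay)

def guiding_force(p, path, k=50.0):
    """Spring force pulling device position p back onto the path."""
    best = min((closest_point_on_segment(p, a, b)
                for a, b in zip(path, path[1:])),
               key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    return (k * (best[0] - p[0]), k * (best[1] - p[1]))

path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # planned collision-free path
print(guiding_force((0.5, 0.2), path))        # pulled back toward y = 0
```

If the user resists this force and the deviation exceeds a threshold, the planner can be re-invoked from the current position, matching the interactive replanning loop described above.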
Virtual Simulation for Lighting & Design Education
M. Boyles, Jeff Rogers, Keith Goreham, Mary Ann Frank, Jan Cowan
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811052

The study of lighting in architectural and interior design education is diverse and difficult. It has been shown that static computer-generated imagery can adequately represent real-world environments for subjective lighting analysis, as long as the software accurately reproduces certain light distributions. This paper describes a prototype environment that explores an alternative educational tool for studying interior lighting through global illumination simulations in a virtual environment. Modern virtual reality technology not only achieves a high-quality visual experience but also allows the student to navigate through a space and interactively adjust lighting parameters. We describe our experience creating such an environment as well as the subjective interpretations of student users.
A Virtual Peer for Investigating Social Influences on Children's Bicycling
Sabarish V. Babu, Timofey Grechkin, Benjamin Chihak, Christine J. Ziemer, J. Kearney, J. Cremer, J. Plumert
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811004

The goal of our work is to develop a programmatically controlled peer to ride with a human subject for the purpose of studying how social interactions influence riding behavior. The peer is controlled through a combination of reactive controllers that determine the gross motion of the virtual bicycle, action-based controllers that animate the virtual bicyclist and generate verbal behaviors, and a keyboard interface that allows an experimenter to initiate the virtual bicyclist's actions during the course of an experiment. The virtual bicyclist's repertoire of behaviors includes road following, riding alongside the human rider, stopping at intersections, and crossing intersections through specified gaps. The virtual cyclist engages the human subject through gaze, gesture, and verbal interactions. We describe the structure of the behavior code and report the results of a pilot study examining how 10- and 12-year-old children interact with a peer cyclist. Results of the pilot study showed that the presence of the peer had a significant influence on the size of the gaps taken as well as on the time left to spare between the participant and the trailing car in the crossed gap.
Can Camera Motions Improve the Perception of Traveled Distance in Virtual Environments?
Léo Terziman, A. Lécuyer, Sébastien Hillaire, J. Wiener
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811012

This paper reports an experiment conducted to evaluate the influence of oscillating camera motions on the perception of traveled distances in virtual environments. In the experiment, participants viewed visual projections of translations along straight paths. They were then asked to reproduce the traveled distance during a navigation phase using keyboard keys. Each participant had to complete the task (1) with linear camera motion and (2) with oscillating camera motion that simulates the visual flow generated by natural human walking. Taken together, our preliminary results suggest that oscillating camera motions allow a more accurate distance reproduction for short traveled distances.
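An oscillating camera of the kind compared above can be sketched by layering a vertical bob and a lateral sway onto linear forward travel; the step frequency and amplitude values below are plausible walking parameters chosen for illustration, not the ones used in the experiment.

```python
# Illustrative oscillating camera motion: sinusoidal vertical bob and
# lateral sway on top of constant forward speed, mimicking the visual
# flow generated by natural human walking.
import math

def camera_position(t, speed=1.4, step_hz=2.0, bob_amp=0.02, sway_amp=0.01):
    """Camera offset (x sway, y bob, z forward) at time t in seconds."""
    z = speed * t
    # the head bobs once per step, i.e. twice per full stride
    y = bob_amp * abs(math.sin(2 * math.pi * step_hz * t))
    # the body sways left-right once per stride
    x = sway_amp * math.sin(math.pi * step_hz * t)
    return (x, y, z)

for t in (0.0, 0.25, 0.5):
    print(camera_position(t))
```

The linear-motion condition corresponds to setting both amplitudes to zero, leaving only the constant forward translation.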
Simulation of Standard Control Actuators in Dynamic Virtual Environments
Frank Gommlich, Guido Heumer, Arnd Vitzthum, B. Jung
2009 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2009.4811049

Realistic behavior of control actuators is important for virtual prototyping applications. We present a systematic approach for modeling such articulated components as described in the European Standard EN 894-3. Control actuators may have several rotational and translational degrees of freedom (DOFs), possibly with discrete lock states. During user interaction, information about the actuators' manipulation is collected and made available to higher application layers in the form of interaction events. This allows recording and playback of demonstrated manipulation sequences for many purposes, such as ergonomics evaluations involving virtual humans. The framework uses XML for declaration and is implemented using a freely available physics engine.
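A rotational DOF with discrete lock states, as described above, can be sketched as a knob that snaps to the nearest detent and emits an interaction event; the class, attribute names, and event format are illustrative assumptions, not the framework's actual XML-declared model.

```python
# Sketch of a control actuator with one rotational DOF and discrete
# lock states: manipulation snaps the angle to the nearest detent and
# reports an interaction event for higher application layers.

class RotaryActuator:
    """A knob whose angle locks to the nearest of a set of detents."""

    def __init__(self, detents_deg):
        self.detents = sorted(detents_deg)
        self.angle = self.detents[0]

    def manipulate(self, target_deg):
        # snap the requested angle to the closest discrete lock state
        self.angle = min(self.detents, key=lambda d: abs(d - target_deg))
        # the returned event could be recorded for later playback,
        # e.g. in ergonomics evaluations with virtual humans
        return {"event": "actuator_moved", "angle": self.angle}

knob = RotaryActuator([0, 30, 60, 90])  # e.g. a four-position switch
print(knob.manipulate(42))              # snaps to the 30-degree detent
```

A continuous (detent-free) DOF is the degenerate case where the angle is clamped to a range instead of snapped to a discrete set.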