Architectural Design in Virtual Reality and Mixed Reality Environments: A Comparative Analysis
Oguzcan Ergün, Şahin Akın, I. Dino, Elif Sürer
Virtual reality (VR) provides a completely digital world of interaction that enables users to modify, edit, and transform digital elements in a responsive way. Mixed reality (MR), which blends the digital and physical worlds together, brings new advancements and challenges to human, computer, and environment interactions. This paper focuses on adapting existing methods and tools in architecture to both VR and MR environments within the sustainable architectural design domain. For this purpose, we benefit from the semantically enriched data platforms of building information modelling (BIM) tools and the performance calculation functions of building energy simulation tools while transferring these data into VR and MR environments. In this way, we were able to merge these diverse data for the virtual design activity. Nine participants tested the initial prototype of the MR-only interaction environment in our previous study [1]. Based on their feedback, the user interface and interaction mechanisms were updated and the environment was made accessible in VR as well. These updates made four types of interaction possible in MR and VR: 1) an MR environment using HoloLens with gestures, 2) an MR environment using HoloLens with a clicker, 3) a VR environment using HTC Vive with two controllers, and 4) a HoloLens emulator with a mouse. All four interaction cases were tested by 21 architecture students in an in-house workshop, in which we collected data on presence, usability, and technology acceptance. Our results show that interaction in the VR environment is the most natural interaction type, and the participants were eager to use both the MR and VR environments instead of an emulator. To the best of our knowledge, this is the first comparative study of a BIM-based architectural design medium in both VR and MR environments.
{"title":"Architectural Design in Virtual Reality and Mixed Reality Environments: A Comparative Analysis","authors":"Oguzcan Ergün, Şahin Akln, I. Dino, Elif Sürer","doi":"10.1109/VR.2019.8798180","DOIUrl":"https://doi.org/10.1109/VR.2019.8798180","url":null,"abstract":"Virtual reality (VR) provides a completely digital world of interaction which enables the users to modify, edit, and transform digital elements in a responsive way. Mixed reality (MR), which is the result of blending the digital world and the physical world together, brings new advancements and challenges to human, computer and environment interactions. This paper focuses on adapting the already-existing methods and tools in architecture to both VR and MR environments under sustainable architectural design domain. For this purpose, we benefit from the semantically enriched data platforms of Building information modelling (BIM) tools, the performance calculation functions of building energy simulation tools while transcending these data into VR and MR environments. In this way, we were able to merge these diverse data for the virtual design activity. Nine participants have already tested the initial prototype of MR-based only interaction environment in our previous study [1]. According to the feedbacks, the user interface and interaction mechanisms were updated and the environment was made accessible also in VR. These updates made four types of interactions possible in MR and VR: 1) MR environment using HoloLens with gestures, 2) MR environment using HoloLens with a clicker, 3) VR environment using HTC Vive with two controllers, and 4) HoloLens emulator with a mouse. All these interaction cases were tested by 21 architecture students in an in-house workshop. In this workshop, we collected data on presence, usability, and technology acceptance of these cases. Our results show that interaction in a VR environment is the most natural interaction type and the participants were eager to use both MR and VR environments instead of an emulator. To our best of knowledge, this is the first comparative study of a BIM-based architectural design medium in both VR and MR environments.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128625372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual Exploratory Activity under Microgravity Conditions in VR: An Exploratory Study during a Parabolic Flight
César Daniel Rojas Ferrer, Hidehiko Shishido, I. Kitahara, Y. Kameda
This work explores human visual exploratory activity (VEA) in a microgravity environment compared to one-G. Parabolic flights are the only way to experience microgravity without astronaut training, and each microgravity segment lasts less than 20 seconds. Under these special conditions, the test subject visually searched a virtual representation of the International Space Station located in his field of regard (FOR). The task was repeated in two different postural positions. Interestingly, the test subject reported a significant reduction in microgravity-related motion sickness while experiencing the VR simulation, compared to his previous parabolic flights without VR.
{"title":"Visual Exploratory Activity under Microgravity Conditions in VR: An Exploratory Study during a Parabolic Flight","authors":"César Daniel Rojas Ferrer, Hidehiko Shishido, I. Kitahara, Y. Kameda","doi":"10.1109/VR.2019.8798253","DOIUrl":"https://doi.org/10.1109/VR.2019.8798253","url":null,"abstract":"This work explores the human visual exploratory activity (VEA) in a microgravity environment compared to one-G. Parabolic flights are the only way to experience microgravity without astronaut training, and the duration of each microgravity segment is less than 20 seconds. Under such special conditions, the test subject visually searches a virtual representation of the International Space Station located in his Field of Regard (FOR). The task was repeated in two different postural positions. Interestingly, the test subject reported a significant reduction of microgravity-related motion sickness while experiencing the VR simulation, in comparison to his previous parabolic flights without VR.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128712132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Perception of a Haptic Shape-changing Interface with Variable Rigidity and Size
Alberto Boem, Yuki Enzaki, H. Yano, Hiroo Iwata
This paper studies the characteristics of human perception of a haptic shape-changing interface capable of altering its size and rigidity simultaneously in order to physically present characteristics of virtual objects. The haptic interface is composed of an array of computer-controlled balloons with two mechanisms, one for changing size and one for changing rigidity. We manufactured two balloons and conducted psychophysical experiments with twenty subjects to measure perceived sensory thresholds and haptic perception of changes in size and rigidity. The results show that subjects can correctly discriminate different conditions with an acceptable level of accuracy. Our results also suggest that the proposed system can present an ample range of rigidities and size variations in a way that is compatible with the human haptic perception of physical materials. Shape-changing interfaces do not yet hold a defined position in current VR/AR research. Our results provide basic knowledge for developing novel types of haptic interfaces that enhance the haptic perception of virtual objects, allow rich embodied interactions, and synchronize the virtual and physical worlds through computationally controlled materiality.
{"title":"Human Perception of a Haptic Shape-changing Interface with Variable Rigidity and Size","authors":"Alberto Boem, Yuki Enzaki, H. Yano, Hiroo Iwata","doi":"10.1109/VR.2019.8798214","DOIUrl":"https://doi.org/10.1109/VR.2019.8798214","url":null,"abstract":"This paper studies the characteristics of the human perception of a haptic shape-changing interface, capable of altering its size and rigidity simultaneously for presenting characteristics of virtual objects physically. The haptic interface is composed of an array of computer-controlled balloons, with two mechanisms, one for changing size and one for changing rigidity. We manufactured two balloons and conducted psychophysical experiments with twenty subjects to measure perceived sensory thresholds and haptic perception of the change of size and rigidity. The results show that subjects can correctly discriminate different conditions with an acceptable level of accuracy. Our results also suggest that the proposed system can present an ample range of rigidities and variations of the size in a way that is compatible with the human haptic perception of physical materials. Currently, shape-changing interfaces do not hold a defined position in the current VR / AR research. Our results provide basic knowledge for developing novel types of haptic interfaces that can enhance the haptic perception of virtual objects, allowing rich embodied interactions, and synchronize the virtual and the physical world through computationally-controlled materiality.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126143460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Effectiveness of Redirected Walking with Auditory Distractors for Navigation in Virtual Environments
Nicholas Rewkowski, Atul Rungta, M. Whitton, M. Lin
Many virtual locomotion interfaces that allow users to move in virtual reality have been built and evaluated, such as redirected walking (RDW), walking-in-place (WIP), and joystick input. RDW has been shown to be among the most natural and immersive, as it supports real walking, and many newer methods further adapt RDW to allow for customization and greater immersion. Most of these methods have been demonstrated to work with vision; in this paper, we evaluate the ability of a general distractor-based RDW framework to be used with auditory display only. We conducted two studies evaluating the differences between RDW with auditory distractors and other distractor modalities using distraction ratio, virtual and physical path information, immersion, simulator sickness, and other measurements. Our results indicate that auditory RDW has the potential to be used for complex navigational tasks, such as crossing streets and avoiding obstacles, and that it can be used without designing the system specifically for audio-only users. Additionally, sense of presence and simulator sickness remain reasonable across all user groups.
{"title":"Evaluating the Effectiveness of Redirected Walking with Auditory Distractors for Navigation in Virtual Environments","authors":"Nicholas Rewkowski, Atul Rungta, M. Whitton, M. Lin","doi":"10.1109/VR.2019.8798286","DOIUrl":"https://doi.org/10.1109/VR.2019.8798286","url":null,"abstract":"Many virtual locomotion interfaces allowing users to move in virtual reality have been built and evaluated, such as redirected walking (RDW), walking-in-place (WIP), and joystick input. RDW has been shown to be among the most natural and immersive as it supports real walking, and many newer methods further adapt RDW to allow for customization and greater immersion. Most of these methods have been demonstrated to work with vision, in this paper we evaluate the ability for a general distractor-based RDW framework to be used with only auditory display. We conducted two studies evaluating the differences between RDW with auditory distractors and other distractor modalities using distraction ratio, virtual and physical path information, immersion, simulator sickness, and other measurements. Our results indicate that auditory RDW has the potential to be used with complex navigational tasks, such as crossing streets and avoiding obstacles. It can be used without designing the system specifically for audio-only users. Additionally, sense of presence and simulator sickness remain reasonable across all user groups.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114406269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AirwayVR: Virtual Reality Trainer for Endotracheal Intubation
P. Rajeswaran, Jeremy Varqhese, T. Kesavadas, Praveen Kumar, J. Vozenilek
Endotracheal intubation is a lifesaving procedure in which a tube is passed through the mouth into the trachea (windpipe) to maintain an open airway and facilitate artificial respiration. It is a complex psychomotor skill that requires significant training and experience to prevent complications. The current methods of training, including manikins and cadavers, are limited in their availability for early-career medical professionals to learn and practice. These training options are also limited in their ability to present high-risk or difficult intubation cases that let experts mentally plan their approach prior to the procedure. In this demo, we present AirwayVR, a virtual-reality-based simulation trainer for intubation training. Our goal is to utilize a virtual reality platform for intubation skills training for two different target audiences of medical professionals, with two different objectives. The first is to use AirwayVR as an introductory platform for novice learners (medical students and residents) to learn and practice intubation in virtual reality. The second is to utilize the technology as a just-in-time training platform for experts to mentally prepare for a complex case prior to the procedure.
{"title":"AirwayVR: Virtual Reality Trainer for Endotracheal Intubation","authors":"P. Rajeswaran, Jeremy Varqhese, T. Kesavadas, Praveen Kumar, J. Vozenilek","doi":"10.1109/VR.2019.8797998","DOIUrl":"https://doi.org/10.1109/VR.2019.8797998","url":null,"abstract":"Endotracheal Intubation is a lifesaving procedure in which a tube is passed through the mouth into the trachea (windpipe) to maintain an open airway and facilitate artificial respiration. It is a complex psychomotor skill, which requires significant training and experience to prevent complications. The current methods of training, including manikins and cadaver, have limitations in terms of their availability for early medical professionals to learn and practice. These training options also have limitations in terms of presenting high risk/difficult intubation cases for experts to mentally plan their approach in high-risk scenarios prior to the procedure. In this demo, we present AirwayVR: virtual reality-based simulation trainer for intubation training. Our goal is to utilize virtual reality platform for intubation skills training for two different target audience (medical professionals) with two different objectives. The first one is to use AirwayVR as an introductory platform to learn and practice intubation in virtual reality for novice learners (Medical students and residents). The second objective is to utilize this technology as a Just-in-time training platform for experts to mentally prepare for a complex case prior to the procedure.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128100470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color Space: 360 VR Hanbok Art Performance
Seonock Park, Jusub Kim
This project explores the possibility of VR as an alternative theatre form for the performing arts. For hundreds of years, the proscenium stage has been the most used form of stage in the performing arts. With a proscenium stage, the audience sees the dramatic action through a frame, which has the advantage of keeping the audience's attention from being dispersed. However, it has the disadvantage of giving the audience a sense of distance from the stage, since the world on the stage is completely separated from the world of the audience. In this 360 VR performance work, we remove the barrier between the audience and the stage, allowing the audience to immerse themselves more fully in the performance, and experiment with a new performance type in which the performing happens around the audience rather than the audience surrounding the performers. For this work, we used a 360 video camera (a rig of six GoPro cameras) to capture the stage, where a group of dancers wearing the Hanbok - the traditional Korean costume - performed a traditional dance specially choreographed for this show. The video was created to promote the beauty of the Hanbok through a more immersive approach.
{"title":"Color Space: 360 VR Hanbok Art Performance","authors":"Seonock Park, Jusub Kim","doi":"10.1109/VR.2019.8798022","DOIUrl":"https://doi.org/10.1109/VR.2019.8798022","url":null,"abstract":"This project explores the possibility of VR as an alternative theatre form for performing arts. For the past hundreds of years, the proscenium stage has been the most used form of stage in the performing arts. At the proscenium stage, the audience sees the dramatic facts through the frame, so it has the advantage of preventing the attention of the audience from being dispersed. However, it has the disadvantage of making the audience have a sense of distance from the stage since the world in the stage is completely separated from the world of the audience. In this 360 VR performance work, we remove the barrier between the audience and the stage allowing the audience to immerse themselves more in the performance, and experiment a new performance type where the performing is done around the audience rather than the audience surrounding performers. For this work, we used a 360 video camera (a rig of 6 GoPro cameras) to capture the stage, where a group of dancers wearing the Hanbok - the Korea traditional costume - performed the traditional dance specially choreographed for this show. This video was created to promote the beauty of the Hanbok as a more immersive approach.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134376647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
fARFEEL: Providing Haptic Sensation of Touched Objects Using Visuo-Haptic Feedback
Naruki Tanabe, Y. Sato, Kohei Morita, Parinya Punpongsanon, H. Matsukura, Michiya Inagaki, D. Iwai, Yuichi Fujino, Kosuke Sato
We present fARFEEL, a remote communication system that provides visuo-haptic feedback, allowing a local user to feel as though they are touching distant objects. The system allows local and remote users to communicate by using a projected virtual hand (VH) as a proxy for the local user's own hand. The necessary haptic information is provided to the local user's non-manipulating hand, so that it does not interfere with the manipulation of the projected VH. We also introduce a possible visual stimulus that could provide a sense of body ownership over the projected VH.
{"title":"fARFEEL: Providing Haptic Sensation of Touched Objects Using Visuo-Haptic Feedback","authors":"Naruki Tanabe, Y. Sato, Kohei Morita, Parinya Punpongsanon, H. Matsukura, Michiya Inagaki, D. Iwai, Yuichi Fujino, Kosuke Sato","doi":"10.1109/VR.2019.8798195","DOIUrl":"https://doi.org/10.1109/VR.2019.8798195","url":null,"abstract":"We present fARFEEL, a remote communication system that provides visuo-haptic feedback allows a local user to feel touching distant objects. The system allows the local and remote users to communicate by using the projected virtual hand (VH) for the agency of his/her own hands. The necessary haptic information is provided to the non-manipulating hand of the local user that does not bother the manipulation of the projected VH. We also introduce the possible visual stimulus that could potentially provide the sense of the body ownership over the projected VH.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131454422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing Techniques for Visualizing Moving Out-of-View Objects in Head-mounted Virtual Reality
Uwe Gruenefeld, I. Koethe, Daniel Lange, Sebastian Weiß, Wilko Heuten
Current head-mounted displays (HMDs) have a limited field of view (FOV). A limited FOV further decreases the already restricted human visual range and amplifies the problem of objects receding from view (e.g., opponents in computer games). However, no previous work has investigated how best to perceive moving out-of-view objects on head-mounted displays. In this paper, we compare two visualization approaches in a user study: (1) overview+detail, with 3D Radar, and (2) focus+context, with EyeSee360, to evaluate their performance for visualizing moving out-of-view objects. We found that using 3D Radar resulted in a significantly lower movement estimation error and higher usability, as measured by the System Usability Scale. 3D Radar was also preferred by 13 out of 15 participants for visualizing moving out-of-view objects.
{"title":"Comparing Techniques for Visualizing Moving Out-of-View Objects in Head-mounted Virtual Reality","authors":"Uwe Gruenefeld, I. Koethe, Daniel Lange, Sebastian WeirB, Wilko Heuten","doi":"10.1109/VR.2019.8797725","DOIUrl":"https://doi.org/10.1109/VR.2019.8797725","url":null,"abstract":"Current head-mounted displays (HMDs) have a limited field-of-view (FOV). A limited FOV further decreases the already restricted human visual range and amplifies the problem of objects receding from view (e.g., opponents in computer games). However, there is no previous work that investigates how to best perceive moving out-of-view objects on head-mounted displays. In this paper, we compare two visualization approaches: (1) Overview+detail, with 3D Radar, and (2) Focus+context, with EyeSee360, in a user study to evaluate their performances for visualizing moving out-of-view objects. We found that using 3D Radar resulted in a significantly lower movement estimation error and higher usability, measured by the system usability scale. 3D Radar was also preferred by 13 out of 15 participants for visualization of moving out-of-view objects.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130600045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interaction Techniques for Cinematic Virtual Reality
Sylvia Rothe, Pascal Pothmann, Heiko Drewes, H. Hussmann
For watching omnidirectional movies via an HMD, turning the head is the most relevant and natural input technique for choosing the visible part of the movie. However, there is a need for additional interactivity in cinematic virtual reality (CVR), e.g., for navigating the movie, for nonlinear story lines, or for communicating with other viewers watching a movie together. The input device should not disturb the viewing experience, and the viewer should not be primarily aware of it. We present a design space based on numerous methods from the literature and our own experience. Building on this design space, we describe interaction techniques that meet the challenges of cinematic virtual reality by combining several of its dimensions. The most promising method, eye-based head gestures, is described in more detail and was implemented for CVR. The user study we conducted will be analyzed in future work.
{"title":"Interaction Techniques for Cinematic Virtual Reality","authors":"Sylvia Rothe, Pascal Pothmann, Heiko Drewes, H. Hussmann","doi":"10.1109/VR.2019.8798189","DOIUrl":"https://doi.org/10.1109/VR.2019.8798189","url":null,"abstract":"For watching omnidirectional movies via HMD, turning the head is the most relevant and natural input technique to choose the visible part of the movie. However, there is a need for additional interactivity in cinematic virtual reality (CVR), e.g. for navigating the movie, for nonlinear story lines or for communication with other viewers watching a movie together. The input device should not disturb the viewing experience and the viewer should not be primarily aware of it. We present a design space based on numerous methods in literature and our own experiences. As a result of the design space we describe interaction techniques which meet the challenges of cinematic virtual reality. For doing this, various dimensions of the design space will be combined. The most promising method, eye-based head gestures, is described in more detail and was implemented for CVR. The conducted user study will be analyzed in future work.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132204854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Tracking Area Shape and Size on Artificial Potential Field Redirected Walking
J. Messinger, Eric Hodgson, E. Bachmann
Immersive virtual environment systems that utilize head-mounted displays and a large tracking area have the advantage of being able to use natural walking as a locomotion interface. In such systems, difficulties arise when the virtual world is larger than the tracking area and users approach the area boundaries. Redirected walking (RDW) is a technique that distorts the correspondence between physical- and virtual-world motion to steer users away from boundaries and obstacles, including other co-immersed users. Recently, an RDW algorithm was proposed based on artificial potential fields (APF), in which walls and obstacles repel the user. APF-RDW effectively supports multiple simultaneous users and, unlike other RDW algorithms, can easily account for tracking area dimensions and room shape when generating steering instructions. This work investigates the performance of a refined APF-RDW algorithm in tracking areas of different sizes and in irregularly shaped rooms, compared to a steer-to-center (STC) algorithm and an un-steered control condition. Data were generated in simulation using logged paths of prior live users and are presented for both single-user and multi-user scenarios. Results show that APF-RDW steers effectively in irregular, concave tracking areas such as L-shaped rooms or crosses, scales to multiple users, and outperforms STC algorithms in almost all conditions.
{"title":"Effects of Tracking Area Shape and Size on Artificial Potential Field Redirected Walking","authors":"J. Messinger, Eric Hodgson, E. Bachmann","doi":"10.1109/VR.2019.8797818","DOIUrl":"https://doi.org/10.1109/VR.2019.8797818","url":null,"abstract":"Immersive Virtual Environment systems that utilize Head Mounted Displays and a large tracking area have the advantage of being able to use natural walking as a locomotion interface. In such systems, difficulties arise when the virtual world is larger than the tracking area and users approach area boundaries. Redirected walking (RDW) is a technique that distorts the correspondence between physical and virtual world motion to steer users away from boundaries and obstacles, including other co-immersed users. Recently, a RDW algorithm was proposed based on the use of artificial potential fields (APF), in which walls and obstacles repel the user. APF-RDW effectively supports multiple simultaneous users and, unlike other RDW algorithms, can easily account for tracking area dimensions and room shape when generating steering instructions. This work investigates the performance of a refined APF-RDW algorithm in different sized tracking areas and in irregularly shaped rooms, as compared to a Steer-to-Center (STC) algorithm and an un-steered control condition. Data was generated in simulation using logged paths of prior live users, and is presented for both single-user and multi-user scenarios. Results show the ability of APF-RDW to steer effectively in irregular concave shaped tracking areas such as L-shaped rooms or crosses, along with scalable multi-user support, and better performance than STC algorithms in almost all conditions.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132216485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}