An Evaluation of Navigational Ability Comparing Redirected Free Exploration with Distractors to Walking-in-Place and Joystick Locomotion Interfaces
Tabitha C Peck, Henry Fuchs, Mary C Whitton
Proceedings. IEEE Virtual Reality Conference, pp. 55-62, March 2011. doi:10.1109/VR.2011.5759437

We report on a user study evaluating Redirected Free Exploration with Distractors (RFED), a large-scale, real-walking locomotion interface, by comparing it to Walking-in-Place (WIP) and Joystick (JS), two common locomotion interfaces. The between-subjects study compared navigation ability with the RFED, WIP, and JS interfaces in VEs more than twice the dimensions of the tracked space. The interfaces were evaluated on navigation and wayfinding metrics, and the results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using the WIP and JS interfaces. RFED participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, and placed and labeled targets on maps more accurately. Moreover, they estimated VE size more accurately.

Conformal Visualization for Partially-Immersive Platforms
Kaloian Petkov, Charilaos Papadopoulos, Min Zhang, Arie E Kaufman, Xianfeng Gu
Proceedings. IEEE Virtual Reality Conference, pp. 143-150, 2011. doi:10.1109/VR.2011.5759453
Current immersive VR systems such as the CAVE provide an effective platform for the immersive exploration of large 3D data. A major limitation is that in most cases at least one display surface is missing due to space, access, or cost constraints. This partially-immersive visualization results in a substantial loss of visual information that may be acceptable for some applications; however, it becomes a major obstacle for critical tasks, such as the analysis of medical data. We propose a conformal deformation rendering pipeline for the visualization of datasets on partially-immersive platforms. The angle-preserving conformal mapping approach is used to map the 360° 3D view volume to arbitrary display configurations. It has the desirable property of preserving shapes under distortion, which is important for identifying features, especially in medical data. The conformal mapping is used for rasterization, real-time ray tracing, and volume rendering of the datasets. Since the technique is applied during rendering, we can construct stereoscopic images from the data, which is usually not true for image-based distortion approaches. We demonstrate the stereo conformal mapping rendering pipeline in the partially-immersive 5-wall Immersive Cabin (IC) for virtual colonoscopy and architectural review.
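
The pipeline described above relies on an angle-preserving map from the full spherical view volume onto the available display surfaces; the paper computes such a map for arbitrary display configurations. As a rough, self-contained illustration of the general idea only (not the authors' method), the sketch below uses a stereographic projection, a classic conformal map between the sphere and the plane, to turn display-plane samples into view-ray directions; the function names and sample ranges are hypothetical.

```python
# Hypothetical sketch: a stereographic projection (a conformal, angle-preserving
# map) used to relate an omnidirectional view to a flat display region. This
# only illustrates the idea of conformal view warping; it is NOT the
# Ricci-flow-based mapping pipeline described in the paper.
import numpy as np

def direction_to_plane(d):
    """Map a unit view direction to the plane via stereographic projection
    from the -z pole (the direction (0, 0, -1) itself has no image)."""
    x, y, z = d
    return np.array([x / (1.0 + z), y / (1.0 + z)])

def plane_to_direction(p):
    """Inverse map: recover the unit view direction for a display sample."""
    u, v = p
    s = u * u + v * v
    return np.array([2.0 * u, 2.0 * v, 1.0 - s]) / (1.0 + s)

# Example: per-pixel ray directions for a square display patch, so content
# from well outside the patch's natural frustum still lands on visible screens.
us, vs = np.meshgrid(np.linspace(-1.5, 1.5, 4), np.linspace(-1.5, 1.5, 4))
rays = np.stack([plane_to_direction(p) for p in zip(us.ravel(), vs.ravel())])
print(rays[:3])  # unit-length ray directions usable by a ray caster
```

Because a warp of this kind is applied while generating rays rather than to a finished image, separate left- and right-eye images can be produced, which is the property the abstract contrasts with image-based distortion approaches.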

Improved Redirection with Distractors: A Large-Scale-Real-Walking Locomotion Interface and its Effect on Navigation in Virtual Environments
Tabitha C Peck, Henry Fuchs, Mary C Whitton
Proceedings. IEEE Virtual Reality Conference, pp. 35-38, March 2010. doi:10.1109/VR.2010.5444816

Users in virtual environments often find navigation more difficult than in the real world. Our new locomotion interface, Improved Redirection with Distractors (IRD), enables users to walk in larger-than-tracked-space VEs without predefined waypoints. We compared IRD to the current best interface, really walking, by conducting a user study measuring navigational ability. Our results show that IRD users can really walk through VEs that are larger than the tracked space and can point to targets and complete maps of VEs no worse than when really walking.

GUD WIP: Gait-Understanding-Driven Walking-In-Place
Jeremy D Wendt, Mary C Whitton, Frederick P Brooks
Proceedings. IEEE Virtual Reality Conference, pp. 51-58, March 2010. doi:10.1109/VR.2010.5444812
Many virtual environments require walking interfaces to explore virtual worlds much larger than the available real-world tracked space. We present a model for generating virtual locomotion speeds from Walking-In-Place (WIP) inputs based on walking biomechanics. By employing gait principles, our model - called Gait-Understanding-Driven Walking-In-Place (GUD WIP) - produces output speeds that better match those evident in Real Walking and better respond to variations in step frequency, including realistic starting and stopping. The speeds output by our implementation demonstrate considerably less within-step fluctuation than a good current WIP system - Low-Latency, Continuous-Motion (LLCM) WIP - while still remaining responsive to changes in user input. We compared the resulting speeds from Real Walking, GUD WIP, and LLCM-WIP via a user study: the average output speeds for Real Walking and GUD WIP respond consistently with changing step frequency, while LLCM-WIP is far less consistent. GUD WIP produces output speeds that are more locally consistent (smooth) and more step-frequency-to-walk-speed consistent than LLCM-WIP.
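
The abstract describes converting step-frequency information from in-place stepping into a virtual walk speed using gait biomechanics; the actual GUD WIP model is specified in the paper. The sketch below is only a minimal, hypothetical illustration of that general idea, assuming a placeholder linear relation between step frequency and step length scaled by leg length; the class name, coefficients, and step-detection interface are invented for illustration.

```python
# Hypothetical sketch of frequency-driven WIP speed: estimate step frequency
# from detected in-place step events, then convert it to a walk speed via a
# gait-inspired step-length model. The coefficients and the step-length
# relation are illustrative placeholders, NOT the published GUD WIP model.
from collections import deque

class FrequencyDrivenWip:
    def __init__(self, leg_length_m, a=0.2, b=0.25, window=4):
        self.leg_length = leg_length_m
        self.a, self.b = a, b            # placeholder gait coefficients
        self.step_times = deque(maxlen=window)

    def on_step(self, t):
        """Call when a step event (e.g., a heel strike) is detected at time t."""
        self.step_times.append(t)

    def speed(self, now, timeout=1.0):
        """Current virtual walk speed in m/s; drops to zero if stepping stops."""
        if len(self.step_times) < 2 or now - self.step_times[-1] > timeout:
            return 0.0
        span = self.step_times[-1] - self.step_times[0]
        freq = (len(self.step_times) - 1) / span             # steps per second
        step_len = self.leg_length * (self.a + self.b * freq)  # assumed relation
        return freq * step_len

wip = FrequencyDrivenWip(leg_length_m=0.9)
for t in [0.0, 0.55, 1.1, 1.65]:
    wip.on_step(t)
print(round(wip.speed(now=1.7), 2))  # ~1.07 m/s at ~1.8 steps/s (placeholders)
```

In this sketch, averaging the frequency estimate over a short window of recent steps is what keeps the output speed from fluctuating within a step while still reacting when the user speeds up, slows down, or stops - the trade-off the abstract highlights.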

Relating Scene-Motion Thresholds to Latency Thresholds for Head-Mounted Displays
Jason Jerald, Mary Whitton
Proceedings. IEEE Virtual Reality Conference, pp. 211-218, 2009. doi:10.1109/VR.2009.4811025

As users of head-tracked head-mounted display systems move their heads, latency causes unnatural scene motion. We 1) analyzed scene motion due to latency and head motion, 2) developed a mathematical model relating latency, head motion, scene motion, and perception thresholds, 3) developed procedures to determine perceptual thresholds of scene velocity and latency without the need for a head-mounted display or a low-latency system, and 4) measured scene-velocity and latency thresholds for six subjects under a specific set of conditions and compared the relationship between these thresholds. The resulting PSEs and JNDs of the latency thresholds are in the same range as those reported by Ellis and Adelstein. The results are a step toward enabling scientists and engineers to determine latency requirements before building immersive virtual environments that use head-mounted display systems.
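
The abstract refers to a mathematical model linking latency, head motion, and scene motion; the full model is given in the paper. One commonly used simplification, offered here only as a hedged sketch and not necessarily the authors' exact formulation: with constant latency dt, the angular registration error is roughly dt times the head's angular velocity, so the latency-induced scene velocity is roughly dt times the head's angular acceleration, and a measured scene-velocity threshold divided by the peak head angular acceleration yields a corresponding latency threshold. The numbers in the example below are illustrative.

```python
# Hypothetical sketch of the simplified latency/scene-motion relation described
# above: induced scene velocity ~= latency * head angular acceleration, so a
# scene-velocity threshold converts to a latency threshold for a given peak
# head acceleration. An illustrative approximation, not the paper's full model.
import math

def induced_scene_velocity(latency_s, head_accel_deg_s2):
    """Approximate latency-induced scene velocity (deg/s)."""
    return latency_s * head_accel_deg_s2

def latency_threshold(scene_velocity_threshold_deg_s, peak_head_accel_deg_s2):
    """Latency (s) at which induced scene motion reaches the velocity threshold."""
    return scene_velocity_threshold_deg_s / peak_head_accel_deg_s2

# Example: quasi-sinusoidal yaw of amplitude 20 deg at 0.5 Hz has peak angular
# acceleration A * (2*pi*f)**2 (illustrative numbers, not measured data).
peak_accel = 20.0 * (2 * math.pi * 0.5) ** 2              # deg/s^2
dt = latency_threshold(2.0, peak_accel)                   # 2 deg/s threshold
print(dt, induced_scene_velocity(dt, peak_accel))         # ~0.010 s, ~2.0 deg/s
```

Under this simplification, the latency threshold for a given head motion scales inversely with peak head angular acceleration, which is why faster head turns make a fixed system latency easier to notice.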