Visual cues improve spatial orientation in telepresence as in VR
Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn
{"title":"Visual cues improve spatial orientation in telepresence as in VR","authors":"Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn","doi":"10.54941/ahfe1002862","DOIUrl":null,"url":null,"abstract":"When moving in reality, successful spatial orientation is enabled\n through continuous updating of egocentric spatial relations to the\n surrounding environment. But in Virtual Reality (VR) or telepresence, cues\n of one’s own movement are rarely provided, which typically impairs spatial\n orientation. Telepresence robots are mostly operated by minimal real\n movements of the user via PC-based controls, which entail a lack of real\n translations and rotations and thus can disrupt spatial orientation. Studies\n in virtual environments show that a certain degree of spatial updating is\n possible without body-based cues to self-motion (vestibular, proprioceptive,\n motor efference) solely through continuous visual information about the\n change in orientation or additional visual landmarks. While a large number\n of studies investigated spatial orientation in virtual environments, spatial\n updating in telepresence remains largely unexplored. VR and telepresence\n environments share the common feature that the user is not physically\n located in the mediated environment and thus interacts in an environment\n that does not correspond to the body-based cues generated by posture and\n self-motion in the real environment. Despite this similarity, virtual and\n telepresence environments also have significant differences in how the\n environment is presented: common, commercially available telepresence\n systems can usually only display the environment on a 2D monitor. The 2D\n monitor impairs the operator's depth perception compared with 3D\n presentation in VR, for instance in an HMD, and interacting by means of\n mouse movements on a 2D plane is indirect compared with moving VR\n controllers and the HMD in 3D space. Thus, it cannot be assumed without\n verification that the spatial orientation in 2D telepresence systems can be\n compared with that in VR systems. Therefore, we employed a standard spatial\n orientation task with a telepresence robot to evaluate if results concerning\n the number of visual cues turn out similar to findings in VR-studies.To\n address the research question, a triangle completion task (TCT) was carried\n out using the telepresence robot Double 3. The participants (n= 30)\n controlled the telepresence robot remotely using a computer and a mouse: At\n first, they moved the robot to a specified point, then they turned the robot\n to orient towards a second specified point, moved there and were then asked\n to return the robot to its starting point. To evaluate the influence of the\n number of visual cues on the performance in the TCT, three conditions that\n varied in the amount of visual information provided for navigating the third\n leg were presented in a within-subjects design. Similar to studies that\n showed support of spatial orientation in TCT by visual cues in VR, the\n number of visual cues available while navigating the third leg supported\n triangle completion with a telepresence robot. This was confirmed by the\n trend of reduced error with more visual cues and a reliable difference\n between the conditions with sparse and many visual cues. Connecting results\n obtained in VR with telepresence and teleoperation scenarios is valuable to\n inform designing telepresence and teleoperation interfaces. 
We demonstrated\n that a standard task for studying spatial orientation performance is\n applicable with telepresence robots.","PeriodicalId":269162,"journal":{"name":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Intelligent Human Systems Integration (IHSI 2023) Integrating People and Intelligent Systems, February 22–24, 2023, Venice, Italy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1002862","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
When moving in reality, successful spatial orientation is enabled
through continuous updating of egocentric spatial relations to the
surrounding environment. But in Virtual Reality (VR) or telepresence, cues
of one’s own movement are rarely provided, which typically impairs spatial
orientation. Telepresence robots are mostly operated via PC-based controls that require only minimal real movements of the user, so that real translations and rotations are largely absent, which can disrupt spatial orientation. Studies in virtual environments show that a certain degree of spatial updating is possible without body-based cues to self-motion (vestibular, proprioceptive, motor efference), relying solely on continuous visual information about the change in orientation or on additional visual landmarks.
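To make concrete what such visual spatial updating involves computationally, the following minimal sketch (not taken from the paper; the 2D coordinate convention and function name are illustrative assumptions) updates the egocentric position of a landmark after the observer translates and then rotates, which is the bookkeeping that continuous visual self-motion information can support.

```python
import numpy as np

def update_egocentric(landmark_ego, translation_ego, rotation_deg):
    """Illustrative update of a landmark's egocentric 2D position (x = right,
    y = forward) after the observer moves by `translation_ego` (expressed in
    the old egocentric frame) and then turns by `rotation_deg`
    (positive = counter-clockwise)."""
    # The landmark shifts opposite to the observer's own translation.
    shifted = np.asarray(landmark_ego, float) - np.asarray(translation_ego, float)
    # A counter-clockwise turn of the observer rotates the scene clockwise
    # in egocentric coordinates, i.e. by -rotation_deg.
    theta = np.deg2rad(-rotation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ shifted

# Example: a landmark 3 m straight ahead; the observer walks 1 m forward and
# turns 90° to the left. The landmark should now lie about 2 m to the right.
print(update_egocentric([0.0, 3.0], [0.0, 1.0], 90.0))  # ≈ [2., 0.]
```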
While a large number of studies have investigated spatial orientation in virtual environments, spatial updating in telepresence remains largely unexplored. VR and telepresence
environments share the common feature that the user is not physically
located in the mediated environment and thus interacts in an environment
that does not correspond to the body-based cues generated by posture and
self-motion in the real environment. Despite this similarity, virtual and
telepresence environments also have significant differences in how the
environment is presented: common, commercially available telepresence
systems can usually only display the environment on a 2D monitor. The 2D
monitor impairs the operator's depth perception compared with 3D
presentation in VR, for instance in a head-mounted display (HMD), and interacting by means of
mouse movements on a 2D plane is indirect compared with moving VR
controllers and the HMD in 3D space. Thus, it cannot be assumed without verification that spatial orientation in 2D telepresence systems is comparable to that in VR systems. Therefore, we employed a standard spatial orientation task with a telepresence robot to evaluate whether results concerning the number of visual cues turn out similar to findings from VR studies.
To address the research question, a triangle completion task (TCT) was carried out using the telepresence robot Double 3. The participants (n = 30) controlled the telepresence robot remotely using a computer and a mouse: first, they moved the robot to a specified point, then they turned it to face a second specified point, moved there, and were finally asked to return the robot to its starting point. To evaluate the influence of the number of visual cues on performance in the TCT, three conditions that varied in the amount of visual information provided for navigating the third leg were presented in a within-subjects design.
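As a minimal illustration of how performance in such a triangle completion task is typically quantified (a generic sketch with assumed coordinates and function names, not the authors' analysis code), one can compute the distance between the point where the participant stopped the robot on the third leg and the true starting point, plus the angular deviation of the homing direction:

```python
import numpy as np

def completion_error(start, stop):
    """Euclidean distance between the true starting point of the triangle and
    the position where the participant stopped the robot on the third leg."""
    return float(np.linalg.norm(np.asarray(stop, float) - np.asarray(start, float)))

def heading_error(turn_point, start, stop):
    """Unsigned angular deviation (in degrees) between the correct homing
    direction (turn point -> start) and the direction actually travelled
    (turn point -> stop)."""
    v_correct = np.asarray(start, float) - np.asarray(turn_point, float)
    v_actual = np.asarray(stop, float) - np.asarray(turn_point, float)
    cos_a = np.dot(v_correct, v_actual) / (
        np.linalg.norm(v_correct) * np.linalg.norm(v_actual))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Hypothetical trial: the triangle starts at the origin, the second waypoint
# is at (2, 3), and the participant stops the robot at (0.4, -0.3) instead of
# returning exactly to (0, 0).
print(completion_error((0, 0), (0.4, -0.3)))       # 0.5 (position error in metres)
print(heading_error((2, 3), (0, 0), (0.4, -0.3)))  # angular deviation in degrees
```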
In line with VR studies showing that visual cues support spatial orientation in the TCT, the number of visual cues available while navigating the third leg supported triangle completion with the telepresence robot. This was confirmed by the
trend of reduced error with more visual cues and a reliable difference
between the conditions with sparse and many visual cues. Connecting results
obtained in VR with telepresence and teleoperation scenarios is valuable for informing the design of telepresence and teleoperation interfaces. We demonstrated
that a standard task for studying spatial orientation performance is
applicable with telepresence robots.