Windshield displays (WSDs) are a promising new technology to augment the entire windscreen with additional information about vehicle state, highlight critical objects in the surroundings, or serve as a replacement for conventional displays. Typically, augmentation is provided in a screen-fixed manner as an overlay on the windscreen. However, it is unclear to date whether this is optimal in terms of usability/UX. In this work, we propose "StickyWSD", a world-fixed positioning strategy, and evaluate its impact on quantitative measures compared to screen-fixed positioning. Results from a user study conducted in a virtual reality driving simulator (N = 23) suggest that the dynamic world-fixed positioning technique shows increased performance, lowered error rates, and reduced take-over times. We conclude that the "StickyWSD" approach offers a lot of potential for WSDs and should be researched further.
Andreas Riegler, A. Riener, Clemens Holzmann. "Towards Dynamic Positioning of Text Content on a Windshield Display for Automated Driving." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364757
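World-fixed placement means the overlay tracks a point in the scene rather than a fixed screen coordinate. A minimal sketch of that distinction, using a simple pinhole projection (the function names and camera model are our own illustration, not the paper's implementation):

```python
def screen_fixed(anchor_xy):
    """Screen-fixed: the overlay stays at a constant windshield coordinate."""
    return anchor_xy

def world_fixed(point_cam, focal=1.0):
    """World-fixed ("StickyWSD"-style): re-project a camera-space scene point
    onto the display plane every frame, so the overlay sticks to the world."""
    x, y, z = point_cam
    if z <= 0:
        return None  # point is behind the viewer; nothing to draw
    return (focal * x / z, focal * y / z)

# A roadside object 2 m to the right and 10 m ahead projects near the screen
# centre; as the car approaches (z shrinks), the overlay drifts outward with it.
far = world_fixed((2.0, 0.0, 10.0))   # (0.2, 0.0)
near = world_fixed((2.0, 0.0, 4.0))   # (0.5, 0.0)
```

The screen-fixed variant ignores vehicle motion entirely, which is exactly the difference the study measures.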
The combination of room-scale virtual reality and non-isometric virtual walking techniques is promising: the former provides a comfortable and natural VR experience, while the latter relaxes the constraint of the physical space surrounding the user. In the last few decades, many non-isometric virtual walking techniques have been proposed to enable unconstrained walking without disrupting the sense of presence in the VR environment. Nevertheless, many works have reported the occurrence of VR sickness near the detection threshold or after prolonged use. There exists a knowledge gap on the level of VR sickness and gait performance for amplified non-isometric virtual walking well beyond the detection threshold. This paper presents an experiment with 17 participants that investigated VR sickness and gait parameters during non-isometric virtual walking at large and detectable translational gain levels. The results showed that the translational gain level had a significant effect on the reported sickness score, gait parameters, and center-of-mass displacements. Surprisingly, participants who did not experience motion sickness symptoms by the end of the experiment adapted well to the non-isometric virtual walking and even showed improved performance at a large gain level of 10x.
C. A. T. Cortes, Hsiang-Ting Chen, Chin-Teng Lin. "Analysis of VR Sickness and Gait Parameters During Non-Isometric Virtual Walking with Large Translational Gain." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364741
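A translational gain remaps physical displacement to virtual displacement by a scalar factor. A minimal per-frame sketch (the function name and per-frame formulation are illustrative assumptions, not the authors' implementation):

```python
def apply_translational_gain(prev_phys, curr_phys, prev_virt, gain):
    """Map one frame of physical head movement to virtual movement.

    gain = 1.0 is isometric walking; the experiment drives gains far past
    the detection threshold, up to 10x.
    """
    return tuple(v + gain * (c - p)
                 for p, c, v in zip(prev_phys, curr_phys, prev_virt))

# A 0.5 m physical step forward rendered under a 10x gain moves the
# viewpoint 5 m through the virtual scene:
virt = apply_translational_gain((0.0, 0.0), (0.0, 0.5), (0.0, 0.0), 10.0)
```

Applying the gain to per-frame deltas (rather than absolute positions) keeps the mapping continuous when the gain changes mid-walk.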
In this paper, we propose a measurement-based modeling framework for hyper-elastic material identification and real-time haptic rendering. We built a custom data collection setup that captures shape deformation and response forces during compressive deformation of cylindrical material samples, and collected training and testing data from four silicone objects with various material profiles. We designed an objective function for material parameter identification that incorporates both shape deformation and reactive forces, and minimized it with a genetic algorithm. For object deformation rendering, we adopted an optimization-based Finite Element Method (FEM). The numerical error of the simulated forces was found to be perceptually negligible.
Arsen Abdulali, Ibragim R. Atadjanov, Seungkyu Lee, Seokhee Jeon. "Measurement-based Hyper-elastic Material Identification and Real-time FEM Simulation for Haptic Rendering." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364275
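The identification objective combines a force-error term and a shape-deformation term. A toy sketch of such a combined least-squares objective (the weights, data layout, and `simulate` callback are illustrative assumptions; the paper's actual forward model is an FEM simulation):

```python
def identification_objective(params, trials, simulate, w_force=1.0, w_shape=1.0):
    """Weighted sum of squared force and shape errors over all recorded trials.

    `simulate(params, displacement)` stands in for the FEM forward model and
    returns a (predicted_force, predicted_shape_metric) pair.
    """
    error = 0.0
    for trial in trials:
        f_sim, s_sim = simulate(params, trial["displacement"])
        error += (w_force * (f_sim - trial["force"]) ** 2
                  + w_shape * (s_sim - trial["shape"]) ** 2)
    return error

# Toy forward model: force grows with stiffness * displacement, the shape
# metric with displacement / stiffness. A genetic algorithm (as in the paper)
# would minimise this objective over candidate material parameters.
toy = lambda params, d: (params[0] * d, d / params[0])
data = [{"displacement": 0.1, "force": 0.2, "shape": 0.05}]
perfect = identification_objective((2.0,), data, toy)  # stiffness 2.0 fits exactly
```

Including both terms is what distinguishes this objective from force-only fitting: two materials can match measured forces yet deform differently.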
Redirected walking allows for natural locomotion in virtual environments that are larger than a user's physical environment. The mapping between real and virtual motion is modified by scaling some aspect of motion. As a user traverses the virtual environment, these modifications (or gains) must be dynamically adjusted to prevent collision with physical obstacles. A significant body of work has established perceptual thresholds on absolute gain levels, but the effect of changing gain is little understood. We present the results of a user study on the effects of the rate of gain change. A psychophysical experiment was conducted with 21 participants. Each participant completed a series of two-alternative forced choice tasks in which they determined whether their virtual motion differed from their physical motion while experiencing one of three different methods of gain change: sudden gain change, slow gain change, and constant gain. Gain thresholds were determined by three interleaved 2-up 1-down staircases, one per condition. Our results indicate that slow gain change is significantly harder to detect than sudden gain change.
Ben J. Congdon, A. Steed. "Sensitivity to Rate of Change in Gains Applied by Redirected Walking." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364277
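A minimal sketch of one such staircase (two consecutive detections make the stimulus smaller, one miss makes it larger; this rule converges near the 70.7% detection point). Class and field names are our own, not the authors' code:

```python
import random

class Staircase:
    """One adaptive 2-up 1-down staircase over a gain-change level."""
    def __init__(self, level, step):
        self.level, self.step = level, step
        self.streak = 0        # consecutive "detected" responses
        self.reversals = []    # threshold is estimated from reversal levels
        self.last_dir = 0

    def update(self, detected):
        if detected:
            self.streak += 1
            if self.streak == 2:  # two detections in a row -> harder stimulus
                self.streak = 0
                self._move(-1)
        else:                      # one miss -> easier stimulus
            self.streak = 0
            self._move(+1)

    def _move(self, direction):
        if self.last_dir and direction != self.last_dir:
            self.reversals.append(self.level)
        self.last_dir = direction
        self.level = max(0.0, self.level + direction * self.step)

# Three interleaved staircases, one per gain-change condition; each trial
# randomly draws which condition to present next.
stairs = {c: Staircase(level=0.4, step=0.05) for c in ("sudden", "slow", "constant")}
condition = random.choice(list(stairs))
```

Interleaving the staircases prevents participants from anticipating the stimulus level of the next trial.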
Today there is a wide variety of haptic devices capable of providing tactile feedback. Although most existing designs aim at realistic simulation of surface properties, their capabilities are limited when it comes to displaying the shape and position of virtual objects. This paper suggests a new concept of a distributed haptic display for realistic interaction with virtual objects of complex shape, using a collaborative robot with a shape-display end-effector. MirrorShape renders a 3D object in a virtual reality (VR) system by contacting the user's hands with the robot end-effector at the calculated point in real time. Our proposed system makes it possible to synchronously merge the contact point position in VR with the end-effector position in the real world. This feature enables the presentation of different shapes and, at the same time, expands the working area compared to desktop solutions. A preliminary user study revealed that MirrorShape was effective at reducing positional error in VR interactions. Potentially, this approach can be used in virtual systems for rendering versatile VR objects across a wide range of sizes, providing a high-fidelity large-scale shape experience.
A. Fedoseev, N. Chernyadev, D. Tsetserukou. "Development of MirrorShape: High Fidelity Large-Scale Shape Rendering Framework for Virtual Reality." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3365049
Haptic retargeting is a virtual reality (VR) interaction technique enabling virtual objects to be "remapped" to different haptic proxies by offsetting the user's virtual hand from their physical hand. While researchers have investigated single-hand retargeting, the effects of bimanual interaction in the context of haptic retargeting have been less explored. In this study, we present an evaluation of perceptual detection rates for bimanual haptic retargeting in VR. We tested 64 combinations of simultaneous left- and right-hand retargeting with offsets ranging from −24° to +24° and found that bimanual retargeting can be more noticeable to users when the hands are redirected in different directions as opposed to the same direction.
Eric J. Gonzalez, Sean Follmer. "Investigating the Detection of Bimanual Haptic Retargeting in Virtual Reality." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364248
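Hand redirection of this kind is often implemented as a rotation of the tracked hand position about a body-centred pivot. A 2D sketch under that assumption (the paper does not publish this exact formulation; the function name and pivot choice are ours):

```python
import math

def retarget_hand(hand, pivot, offset_deg):
    """Rotate a physical hand position about a pivot by `offset_deg` to place
    the virtual hand. The study spans offsets from -24 to +24 degrees per hand."""
    t = math.radians(offset_deg)
    dx, dy = hand[0] - pivot[0], hand[1] - pivot[1]
    return (pivot[0] + dx * math.cos(t) - dy * math.sin(t),
            pivot[1] + dx * math.sin(t) + dy * math.cos(t))

# "Different directions" condition: left hand redirected -24 deg, right +24 deg,
# which the study found more noticeable than redirecting both the same way.
left = retarget_hand((-0.3, 0.5), (0.0, 0.0), -24.0)
right = retarget_hand((0.3, 0.5), (0.0, 0.0), +24.0)
```

In a full system the offset would be blended in gradually as the hand approaches the proxy, rather than applied as a constant rotation.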
Virtual reality experiences today are predominantly based on horizontal locomotion. In these experiences, movement in the virtual space is accomplished using teleportation, gaze input, or tracking in physical space, which is limited to a certain extent. Our work focuses on intuitive interactions for vertical locomotion involving both hands and feet. One such instance of vertical locomotion is ladder climbing. In this paper, we present an interaction technique for climbing a ladder in virtual reality (VR). This technique is derived from the natural motions of the limbs while climbing a ladder in reality, adhering to safe climbing practices. The developed interaction can be used in training experiences as well as gaming experiences. Preliminary evaluation of our interaction technique showed positive results across dimensions such as learnability, natural mapping, and intuitiveness.
Vineet Kamboj, Tuhin Bhuyan, Jayesh S. Pillai. "Vertical Locomotion in VR Using Full Body Gestures." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364770
Mudslide education is important for children. In this study, a design-based research approach was used to develop an educational VR mudslide game for children. Eleven children participated in the usability evaluations. The results indicated the importance of intuitive, easy-to-learn controls. Six major refinements of the VR mudslide game were made to improve usability. Feedback from the participants will guide future game refinements to increase users' engagement and interaction.
Mengping Tsuei, Jen-I Chiu, Tsu-Wei Peng, Yuan-Chen Chang. "Preliminary Evaluation of the Usability of a Virtual Reality Game for Mudslide Education for Children." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364710
Current consumer virtual reality applications typically represent the user by an avatar comprising a simple head/torso and decoupled hands. Prior work by Steed et al. showed that the presence or absence of an avatar can have a significant impact on the cognitive load of the user. We extend that work in two ways. First, they used only a full-body avatar with articulated arms, so we add a condition with a hands-only representation similar to the majority of current consumer applications. Second, we provide a real-world benchmark to begin assessing the impact of using an immersive system at all. We validate the prior results: performance on a memory task with a real body or a full-body avatar is significantly better than with no avatar. However, the hands-only condition is not significantly different from either of these two extremes. We discuss why this might be; in particular, we discuss the potential for individual variation in response to the level of embodiment.
Ye Pan, A. Steed. "Avatar Type Affects Performance of Cognitive Tasks in Virtual Reality." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364270
In this work, we present an augmented reality (AR) approach for position-based services using a smartphone in an indoor environment. The AR method, combined with position estimation, provides a smartphone user with a service that is specific to a particular position, without using a marker or any other hardware device. The position in the indoor environment is estimated using only the smartphone's IMU sensor. The accuracy of the user's position and heading direction is improved by integrating the values from the accelerometer and the gyroscope using Principal Component Analysis (PCA) and an Extended Kalman Filter (EKF). Drift in the estimated position is then reduced by a registration step performed at a specific position. The estimated position is fed to the position-based service, which is presented to the user on the smartphone screen through AR. The concept of the proposed method is demonstrated with several examples.
Jihoon Park, Sangmin Park, K. Ko. "Augmented Reality Approach For Position-based Service using Handheld Smartphone." Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST 2019), November 2019. DOI: 10.1145/3359996.3364712
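The pipeline described, step-based dead reckoning from the IMU with drift cancelled at known registration points, can be sketched as follows (function names, step length, and snap radius are illustrative assumptions):

```python
import math

def dead_reckon(pos, heading_rad, step_length=0.7):
    """Advance the position estimate by one detected step along the current
    heading (the heading itself would come from the PCA/EKF gyro+accel fusion)."""
    return (pos[0] + step_length * math.sin(heading_rad),
            pos[1] + step_length * math.cos(heading_rad))

def register(pos, landmarks, radius=1.0):
    """Snap the drifting estimate to a known registration point when close,
    cancelling the IMU drift accumulated since the last registration."""
    for lm in landmarks:
        if math.hypot(pos[0] - lm[0], pos[1] - lm[1]) <= radius:
            return lm
    return pos

# Walk two steps along heading 0, then re-register at a known doorway at (0, 1.5):
p = dead_reckon((0.0, 0.0), 0.0)
p = dead_reckon(p, 0.0)
p = register(p, landmarks=[(0.0, 1.5)])
```

Because IMU-only dead reckoning drifts without bound, the periodic registration step is what keeps the AR overlay anchored to the right location over time.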