Like a Rolling Stone: Effects of Space Deformation During Linear Acceleration on Slope Perception and Cybersickness
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00081
Tongyu Nie, I. Adhanom, Evan Suma Rosenberg
The decoupled relationship between optical and inertial information in virtual reality is commonly acknowledged as a major factor contributing to cybersickness. Based on the laws of physics, we observed that a slope naturally affords acceleration, and the gravito-inertial force we experience when accelerating freely on a slope has the same relative direction and approximately the same magnitude as the gravity we experience when standing on the ground. This provides the opportunity to simulate a slope by manipulating the orientation of virtual objects in accordance with the accelerating optical flow. In this paper, we present a novel space deformation technique that deforms the virtual environment to replicate the structure of a slope when the user accelerates virtually. As a result, we can restore the physical relationship between the optical and inertial information available to the user. However, the changes to the geometry of the virtual environment during space deformation remain perceptible to users. Consequently, we created two different transition effects, pinch and tilt, which provide different visual experiences of ground bending. A human subject study (N=87) was conducted to evaluate the effects of space deformation on both slope perception and cybersickness. The results confirmed that the proposed technique created a strong feeling of traveling on a slope, but no significant differences were found on measures of discomfort and cybersickness.
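To make the physics argument concrete: a body sliding freely down a frictionless slope of angle θ accelerates at a = g·sin(θ) along the slope, and the specific (gravito-inertial) force it feels is perpendicular to the slope surface with magnitude g·cos(θ), which stays close to g for moderate angles. A minimal sketch of this equivalence, with illustrative function names not taken from the paper:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def slope_angle_for_acceleration(a: float) -> float:
    """Tilt angle (radians) of a frictionless slope on which a body
    sliding freely accelerates at rate `a` (from a = g * sin(theta))."""
    if not 0.0 <= a < G:
        raise ValueError("acceleration must be in [0, g)")
    return math.asin(a / G)

def residual_gravito_inertial_magnitude(theta: float) -> float:
    """Magnitude of the specific force felt on the slope: g * cos(theta).
    For small tilt angles this stays close to g, which is why a deformed
    (bent) virtual ground can plausibly stand in for real inertial cues."""
    return G * math.cos(theta)

# Example: a forward virtual acceleration of 2 m/s^2 corresponds to a
# ground tilt of ~11.8 degrees, with a felt "gravity" of ~9.6 m/s^2.
theta = slope_angle_for_acceleration(2.0)
print(math.degrees(theta), residual_gravito_inertial_magnitude(theta))
```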
{"title":"Like a Rolling Stone: Effects of Space Deformation During Linear Acceleration on Slope Perception and Cybersickness","authors":"Tongyu Nie, I. Adhanom, Evan Suma Rosenberg","doi":"10.1109/VR55154.2023.00081","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00081","url":null,"abstract":"The decoupled relationship between the optical and inertial information in virtual reality is commonly acknowledged as a major factor contributing to cybersickness. Based on laws of physics, we noticed that a slope naturally affords acceleration, and the gravito-inertial force we experience when we are accelerating freely on a slope has the same relative direction and approximately the same magnitude as the gravity we experience when standing on the ground. This provides the opportunity to simulate a slope by manipulating the orientation of virtual objects accordingly with the accelerating optical flow. In this paper, we present a novel space deformation technique that deforms the virtual environment to replicate the structure of a slope when the user accelerates virtually. As a result, we can restore the physical relationship between the optical and inertial information available to the user. However, the changes to the geometry of the virtual environment during space deformation remain perceptible to users. Consequently, we created two different transition effects, pinch and tilt, which provide different visual experiences of ground bending. A human subject study (N=87) was conducted to evaluate the effects of space deformation on both slope perception and cyber-sickness. The results confirmed that the proposed technique created a strong feeling of traveling on a slope, but no significant differences were found on measures of discomfort and cybersickness.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129330018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wind comfort and emotion can be changed by the cross-modal presentation of audio-visual stimuli of indoor and outdoor environments
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00037
K. Ito, Juro Hosoi, Yuki Ban, Takayuki Kikuchi, Kyosuke Nakagawa, Hanako Kitagawa, Chizuru Murakami, Yosuke Imai, S. Warisawa
The development of methods to simulate the sensation of wind, which can promote relaxation and elicit positive emotional responses, has become a topic of interest with the widespread adoption of virtual and augmented reality systems. Previous studies have simulated natural wind by varying wind speed in a controlled environment or by moving a large flow of air through an area. In contrast to such approaches, which modulate physical airflow, previous research has rarely considered using multisensory stimuli to alter the impression and sense of comfort provided by a simulated wind. If visual and auditory stimuli affect wind comfort, a multisensory design should be considered for relaxation systems that use wind effects. Therefore, we experimentally measured wind comfort and associated emotions as participants experienced outdoor and indoor virtual environments through immersive virtual reality, investigating whether cross-modal effects of varying audio-visual stimuli would impact the relaxation effects associated with a virtual wind. The results show that the virtual environment of an outdoor meadow and the sound of natural wind significantly improved users' subjective experience of comfort and openness associated with the wind, as well as their emotional state. Simulated natural wind reduced mental stress compared to a condition without wind, as shown by questionnaires and biometric data. The results of this study indicate that multisensory stimuli conveying natural impressions and simulated natural wind are effective for wind-based relaxation.
{"title":"Wind comfort and emotion can be changed by the cross-modal presentation of audio-visual stimuli of indoor and outdoor environments","authors":"K. Ito, Juro Hosoi, Yuki Ban, Takayuki Kikuchi, Kyosuke Nakagawa, Hanako Kitagawa, Chizuru Murakami, Yosuke Imai, S. Warisawa","doi":"10.1109/VR55154.2023.00037","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00037","url":null,"abstract":"The development of methods to simulate the sensation of wind that can promote relaxation and elicit positive emotional responses has become a topic of interest with the widespread adoption of virtual and augmented reality systems. Previous studies have simulated natural wind by varying wind speed in a controlled environment or moving a large flow of air through an area. In contrast to such approaches to modulate physical airflow, the use of multisensory stimuli to alter the impression and sense of comfort provided by a simulated wind has rarely been considered in previous research. If visual and auditory stimuli affect wind comfort, a multisensory design should be considered for relaxation systems that use wind effects. Therefore, we experimentally measured wind comfort and associated emotions when participants experienced outdoor and indoor virtual environments through immersive virtual reality to investigate whether cross-modal effects of variations in audio-visual stimuli would impact the relaxation effects associated with a virtual wind. The results show that the virtual environment of an outdoor meadow and the sound of natural wind significantly improved users' subjective experience of comfort and openness associated with the wind, as well as their emotional state. Simulated natural wind reduced mental stress compared to a condition without wind, as shown by questionnaires and biometric data. The results of this study indicate that multisensory stimuli conveying natural impressions and simulated natural wind are effective for wind-based relaxation.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114637717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Continuous VR Weight Illusion by Combining Adaptive Trigger Resistance and Control-Display Ratio Manipulation
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00040
Carolin Stellmacher, André Zenner, Oscar Ariza, E. Kruijff, Johannes Schöning
Handheld virtual reality (VR) controllers enable users to manipulate virtual objects in VR but do not convey a virtual object's weight. This hinders users from effectively experiencing lighter and heavier objects. While previous work explored either hardware-based interfaces or software-based pseudo-haptics, in this paper we combine the two techniques to improve virtual weight perception in VR. By adapting the trigger resistance of the VR controller when grasping a virtual object and manipulating the control-display (C/D) ratio during lifting, we create a continuous weight sensation. In a psychophysical study (N=29), we compared our combined approach against the individual rendering techniques. Our results show that participants were significantly more sensitive to smaller weight differences in the combined weight simulations than with the individual methods. Additionally, participants were able to determine weight differences significantly faster with both cues present than with the single pseudo-haptic technique. While all three techniques were generally rated as effective, the combined approach was favoured the most. Our findings demonstrate the meaningful benefit of combining physical and virtual techniques for virtual weight rendering over previously proposed methods.
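As an illustration of the pseudo-haptic half of the approach: with a C/D ratio below 1, the rendered lift of the virtual object lags behind the tracked physical hand, which is perceived as weight. A minimal sketch, where the mass-to-ratio mapping is hypothetical (the paper's actual parameters may differ):

```python
def displayed_lift_height(physical_height: float, cd_ratio: float) -> float:
    """Map the tracked physical lift height to the rendered height of
    the virtual hand/object. A C/D ratio below 1 makes the virtual
    object lag behind the real hand, which is perceived as weight."""
    return physical_height * cd_ratio

def cd_ratio_for_mass(mass: float, reference_mass: float = 1.0,
                      min_ratio: float = 0.6) -> float:
    """Hypothetical mapping from virtual object mass to a C/D ratio:
    heavier objects get a smaller ratio, clamped to stay usable."""
    ratio = reference_mass / max(mass, reference_mass)
    return max(ratio, min_ratio)

# A 2 kg virtual object rendered with ratio 0.6: a 10 cm physical lift
# shows as a 6 cm virtual lift.
print(displayed_lift_height(0.10, cd_ratio_for_mass(2.0)))
```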
{"title":"Continuous VR Weight Illusion by Combining Adaptive Trigger Resistance and Control-Display Ratio Manipulation","authors":"Carolin Stellmacher, André Zenner, Oscar Ariza, E. Kruijff, Johannes Schöning","doi":"10.1109/VR55154.2023.00040","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00040","url":null,"abstract":"Handheld virtual reality (VR) controllers enable users to manipulate virtual objects in VR but do not convey a virtual object's weight. This hinders users from effectively experiencing lighter and heavier objects. While previous work explored either hardware-based interfaces or software-based pseudo-haptics, in this paper, we combine two techniques to improve the virtual weight perception in VR. By adapting the trigger resistance of the VR controller when grasping a virtual object and manipulating the control-display (C/D) ratio during lifting, we create a continuous weight sensation. In a psychophysical study (N=29), we compared our combined approach against the individual rendering techniques. Our results show that participants were significantly more sensitive towards smaller weight differences in the combined weight simulations compared to the individual methods. Additionally, participants were also able to determine weight differences significantly faster with both cues present compared to the single pseudo-haptic technique. While all three techniques were generally valued to be effective, the combined approach was favoured the most. Our findings demonstrate the meaningful benefit of combining physical and virtual techniques for virtual weight rendering over previously proposed methods.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121440565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the Fine Motor Control Ability of Linear Hand Movement in Virtual Reality
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00058
Xin Yi, Xueyang Wang, Jiaqi Li, Hewu Li
Linear hand movement in mid-air is one of the most fundamental interactions in virtual reality (e.g., when dragging, scaling, or manipulating objects and drawing shapes). However, the lack of tactile feedback makes it difficult to precisely control the direction and amplitude of hand movement. In this paper, we conducted three user studies to progressively examine users' fine motor control abilities in 3D linear hand movement tasks. In Study 1, we examined participants' behavioural patterns when drawing straight lines of various directions and lengths, using both the hand and the controller. Results showed that the exhibited stroke length tended to be longer than perceived, regardless of the interaction tool, while displaying the trajectory helped reduce directional and length errors. In Study 2, we further tested the effect of different visual references and found that, compared with an empty room or cluttered scenarios, providing only a virtual table yielded higher input precision and user preference. In Study 3, we repeated Study 2 in real dragging and scaling tasks and verified the generalizability of the findings in terms of input error. Our core finding is that the user's hand moves significantly farther than the task length due to the underestimation of stroke length, yet the error of Z-axis movement is smaller than that of X-axis and Y-axis movement, and a simple virtual desktop can effectively reduce errors.
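For concreteness, the two per-trial error measures such a study relies on, length error (signed overshoot) and directional error, can be computed as follows; the metric definitions here are illustrative, not copied from the paper:

```python
import numpy as np

def stroke_errors(drawn_end: np.ndarray, start: np.ndarray,
                  target_dir: np.ndarray, target_len: float):
    """Length error (signed overshoot in metres) and directional error
    (angle in degrees between the drawn vector and the target direction)
    for one trial. Hypothetical metric definitions for illustration."""
    v = drawn_end - start
    length_error = np.linalg.norm(v) - target_len
    cos_angle = np.dot(v, target_dir) / (
        np.linalg.norm(v) * np.linalg.norm(target_dir))
    direction_error = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return length_error, direction_error

# A 22 cm stroke drawn for a 20 cm target along Z, slightly off-axis:
le, de = stroke_errors(np.array([0.0, 0.02, 0.22]),
                       np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.20)
print(le, de)  # ~0.021 m overshoot, ~5.2 degrees off-direction
```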
{"title":"Examining the Fine Motor Control Ability of Linear Hand Movement in Virtual Reality","authors":"Xin Yi, Xueyang Wang, Jiaqi Li, Hewu Li","doi":"10.1109/VR55154.2023.00058","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00058","url":null,"abstract":"Linear hand movement in mid-air is one of the most fundamental interactions in virtual reality (e.g., when dragging/scaling/manipulating objects and drawing shapes). However, the lack of tactile feedback makes it difficult to precisely control the direction and amplitude of hand movement. In this paper, we conducted three user studies to progressively examine users' ability of fine motor control in 3D linear hand movement tasks. In Study 1, we examined participants' behavioural patterns when drawing straight lines in various directions and lengths, using both the hand and the controller. Results showed that the exhibited stroke length tended to be longer than perceived, regardless of the interaction tool. While displaying the trajectory could help reduce directional and length errors. In Study 2, we further tested the effect of different visual references and found that, compared with an empty room or cluttered scenarios, providing only a virtual table yielded higher input precision and user preference. In Study 3, we repeated Study 2 in real dragging and scaling tasks and verified the generalizability of the findings in terms of input error. Our core finding is that the user's hand moves significantly longer than the task length due to the underestimation of stroke length, yet the error of the Z-axis movement is smaller than that of the X-axis and the Y-axis, and a simple virtual desktop can effectively reduce errors.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126369923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Optical Bench: Teaching Spherical Lens Layout in VR with Real-Time Ray Tracing
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00065
M. Bellgardt, Sebastian Pape, David Gilbert, M. Prochnau, Georg König, T. Kuhlen
Optical systems design is usually taught on an optical bench. While experimentation plays an important role in education, experiments involving expensive or dangerous components are usually limited to short, heavily supervised sessions. Computer simulations, on the other hand, offer high accessibility but suffer from reduced realism and tangibility when presented on a 2D screen. For this reason, we present the virtual optical bench, an application that lets users explore spherical lens layouts in virtual reality (VR). We implemented a numerically accurate simulation of optical systems using Nvidia OptiX, as well as a prototypical VR application, which we then evaluated in an expert review with six optics experts. Based on their feedback, we re-implemented our VR application in Unreal Engine 4. The re-implementation has since been actively used for teaching optical layouts, where we performed a qualitative evaluation with 18 students. We show that our virtual optical bench achieves good usability and is perceived to enhance the understanding of course contents.
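The optics of tracing rays through spherical lens layouts can be illustrated with paraxial ray transfer (ABCD) matrices; the paper's OptiX simulation traces exact rays, so the following is only a simplified sketch of the underlying math:

```python
import numpy as np

def translation(d: float) -> np.ndarray:
    """Propagate a paraxial ray (height y, angle u) a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def refraction(n1: float, n2: float, R: float) -> np.ndarray:
    """Paraxial refraction at a spherical surface of radius R
    (R > 0: centre of curvature to the right), from index n1 to n2."""
    return np.array([[1.0, 0.0], [(n1 - n2) / (n2 * R), n1 / n2]])

# Biconvex lens in air: n = 1.5, R1 = +100 mm, R2 = -100 mm,
# centre thickness 5 mm. Compose refraction/translation matrices
# in the order the ray encounters them.
system = (refraction(1.5, 1.0, -100.0)
          @ translation(5.0)
          @ refraction(1.0, 1.5, 100.0))

# Effective focal length from the C element of the system matrix.
f = -1.0 / system[1, 0]
print(f)  # ~100.8 mm, matching the thick-lens lensmaker's equation
```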
{"title":"Virtual Optical Bench: Teaching Spherical Lens Layout in VR with Real-Time Ray Tracing","authors":"M. Bellgardt, Sebastian Pape, David Gilbert, M. Prochnau, Georg König, T. Kuhlen","doi":"10.1109/VR55154.2023.00065","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00065","url":null,"abstract":"Teaching in optical systems design is usually performed on an optical bench. While experimentation plays an important role in education, experiments involving expensive or dangerous components are usually limited to short, heavily supervised sessions. Computer simulations, on the other hand, offer high accessibility, but suffer from reduced realism and tangibility when presented on a 2D screen. For this reason, we present the virtual optical bench, an application that lets users explore spherical lens layouts in virtual reality (VR). We implemented a numerically accurate simulation of optical systems using Nvidia OptiX, as well as a prototypical VR application, which we then evaluated in an expert review with 6 optics experts. Based on their feedback, we re-implemented our VR application in Unreal Engine 4. The re-implementation has since been actively used for teaching optical layouts, where we performed a qualitative evaluation with 18 students. We show that our virtual optical bench achieves good usability and is perceived to enhance the understanding of course contents.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128046911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Persuasive Vibrations: Effects of Speech-Based Vibrations on Persuasion, Leadership, and Co-Presence During Verbal Communication in VR
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00070
Justine Saint-Aubert, F. Argelaguet, M. Macé, C. Pacchierotti, A. Amedi, A. Lécuyer
In Virtual Reality (VR), a growing number of applications involve verbal communication with avatars, such as teleconferencing, entertainment, virtual training, and social networking. In this context, our paper investigates how tactile feedback consisting of vibrations synchronized with speech could influence aspects of VR social interaction such as persuasion, co-presence, and leadership. We conducted two experiments in which participants embodied a first-person avatar attending a virtual meeting in immersive VR. In the first experiment, participants listened to two speaking virtual agents, and the speech of one agent was augmented with vibrotactile feedback. Interestingly, the results show that such vibrotactile feedback could significantly improve not only the perceived co-presence but also the persuasiveness and leadership of the haptically-augmented agent. In the second experiment, participants were asked to speak to two agents, and their own speech was augmented or not with vibrotactile feedback. The results show that vibrotactile feedback again had a positive effect on co-presence, and that participants perceived their speech as more persuasive in the presence of haptic feedback. Taken together, our results demonstrate the strong potential of haptic feedback for supporting social interactions in VR and pave the way for novel uses of vibrations in a wide range of applications in which verbal communication plays a prominent role.
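The abstract does not detail how vibrations are derived from speech; a common approach, sketched here purely as an assumption, is to extract the short-time amplitude envelope of the speech signal and use it to modulate a vibrotactile carrier so that vibration bursts track syllables:

```python
import numpy as np

def speech_to_vibration(speech: np.ndarray, sr: int,
                        carrier_hz: float = 150.0,
                        frame_ms: float = 20.0) -> np.ndarray:
    """Amplitude-modulate a vibrotactile carrier with the speech
    envelope so vibration bursts follow syllables. A hypothetical
    rendering pipeline, not the paper's exact method."""
    frame = int(sr * frame_ms / 1000)
    n_frames = len(speech) // frame
    # Short-time RMS envelope, upsampled back to the audio rate.
    env = np.sqrt(np.mean(
        speech[:n_frames * frame].reshape(n_frames, frame) ** 2, axis=1))
    env = np.repeat(env, frame)
    t = np.arange(len(env)) / sr
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return env / (env.max() + 1e-9) * carrier

# Example: turn one second of (random stand-in) speech into a drive signal.
buzz = speech_to_vibration(np.random.randn(48000), 48000)
```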
{"title":"Persuasive Vibrations: Effects of Speech-Based Vibrations on Persuasion, Leadership, and Co-Presence During Verbal Communication in VR","authors":"Justine Saint-Aubert, F. Argelaguet, M. Macé, C. Pacchierotti, A. Amedi, A. Lécuyer","doi":"10.1109/VR55154.2023.00070","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00070","url":null,"abstract":"In Virtual Reality (VR), a growing number of applications involve verbal communications with avatars, such as for teleconference, entertainment, virtual training, social networks, etc. In this context, our paper aims to investigate how tactile feedback consisting in vibrations synchronized with speech could influence aspects related to VR social interactions such as persuasion, co-presence and leadership. We conducted two experiments where participants embody a first-person avatar attending a virtual meeting in immersive VR. In the first experiment, participants were listening to two speaking virtual agents and the speech of one agent was augmented with vibrotactile feedback. Interestingly, the results show that such vibrotactile feedback could significantly improve the perceived co-presence but also the persuasiveness and leadership of the haptically-augmented agent. In the second experiment, the participants were asked to speak to two agents, and their own speech was augmented or not with vibrotactile feedback. The results show that vibrotactile feedback had again a positive effect on co-presence, and that participants perceive their speech as more persuasive in presence of haptic feedback. Taken together, our results demonstrate the strong potential of haptic feedback for supporting social interactions in VR, and pave the way to novel usages of vibrations in a wide range of applications in which verbal communication plays a prominent role.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131920473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Providing 3D Guidance and Improving the Music-Listening Experience in Virtual Reality Shooting Games Using Musical Vibrotactile Feedback
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00043
Yusuke Yamazaki, S. Hasegawa
In this study, we aim to improve the experience of virtual reality (VR) shooting games by employing a 3D haptic guidance method using necklace-type and belt-type haptic devices. These devices modulate vibrations that are generated from and synchronized with musical signals according to the azimuth and height of a target in 3D space, which is expected to improve the gaming experience by providing 3D guidance while enhancing the music-listening experience. As a first step, we evaluated the method's potential by conducting an experiment in which participants were asked to shoot a randomly spawned target moving in 3D VR space. The experiment applied four conditions: the proposed method (Haptic), a 3D radar display representing a visual guidance method (Vision), no guidance (None), and a combination of Haptic and Vision (VisHap). Outcomes related to the success rate and completion time of the shooting task, the number of head rotations, and participant responses to a follow-up questionnaire revealed that Haptic performed significantly better than None but was inferior to Vision, indicating that the proposed method succeeded in effectively providing 3D guidance. VisHap performed roughly as well as Vision and was preferred over the other conditions in most cases, indicating the general usefulness of the proposed method. Meanwhile, the questionnaire findings suggest that although the modulated vibrations improved the music-listening experience during the shooting task, the impact on the overall gaming experience is unclear. This warrants further research.
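A plausible mapping from target direction to the two ring-shaped devices, offered purely as an illustrative sketch (the paper's actual modulation scheme may differ): pan vibration intensity across the ring of actuators toward the target azimuth, and split energy between the necklace and belt according to elevation:

```python
import math

def actuator_intensities(target_azimuth: float, n_actuators: int = 8) -> list:
    """Spread vibration intensity over a ring of actuators so the
    strongest vibration points toward the target azimuth (radians).
    A hypothetical panning scheme, not the paper's exact mapping."""
    intensities = []
    for i in range(n_actuators):
        angle = 2 * math.pi * i / n_actuators
        # Wrapped angular distance between actuator and target.
        diff = math.atan2(math.sin(target_azimuth - angle),
                          math.cos(target_azimuth - angle))
        # Cosine falloff, clamped so only actuators facing the target fire.
        intensities.append(max(math.cos(diff), 0.0))
    return intensities

def height_balance(target_elevation: float) -> tuple:
    """Split overall intensity between the necklace (upper) and belt
    (lower) rings from a normalized target elevation in [-1, 1]."""
    upper = (target_elevation + 1.0) / 2.0
    return upper, 1.0 - upper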
{"title":"Providing 3D Guidance and Improving the Music-Listening Experience in Virtual Reality Shooting Games Using Musical Vibrotactile Feedback","authors":"Yusuke Yamazaki, S. Hasegawa","doi":"10.1109/VR55154.2023.00043","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00043","url":null,"abstract":"In this study, we aim to improve the experience of virtual reality (VR) shooting games by employing a 3D haptic guidance method using necklace-type and belt-type haptic devices. Such devices help to modulate the vibrations generated by and synchronized with musical signals according to the azimuth and height of a target in 3D space, which is expected to improve the gaming experience by providing 3D guidance and enhancing the music-listening experience. For the first step, we evaluated the method's potential by conducting an experiment in which participants were asked to shoot a randomly spawned target moving in 3D VR space. The experiment applied four conditions: the proposed method (Haptic), displaying 3D radar (Vision) to represent the visualization method, no guidance (None), and a combination of Haptic and Vision (VisHap). Outcomes related to the success rate and accomplishment time (of the shooting task), the number of head rotations, and participant responses to a follow-up questionnaire revealed that Haptic performed significantly better than None but was inferior to Vision, indicating that the proposed method succeeded in terms of effectively providing 3D guidance. VisHap performed roughly as well as Vision and was preferred to other conditions in most cases, indicating the general usefulness of the proposed method. Meanwhile, the findings from the questionnaire suggest that although the modular vibrations improved the music-listening experience during the shooting task, the impact on the overall gaming experience is unclear. This warrants further research.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132650243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WARPY: Sketching Environment-Aware 3D Curves in Mobile Augmented Reality
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00052
Rawan Alghofaili, Cuong Nguyen, Vojtech Krs, N. Carr, R. Mech, L. Yu
Three-dimensional curve drawing in Augmented Reality (AR) enables users to create 3D curves that fit within the real-world scene. It has applications in 3D design, sculpting, and animation. However, the task complexity increases when the desirable path for the curve is obstructed by the physical environment or by what the camera can see. For example, it is difficult to draw a curve that wraps around an object or extends to out-of-reach places. We propose WARPY, an environment-aware 3D curve drawing tool for mobile AR. Our system enables users to draw freeform curves from a distance in AR by combining 2D-to-3D sketch inference with geometric proxies. Geometric proxies can be obtained via 3D scanning or from a list of pre-defined primitives. WARPY also provides a multi-view mode that enables users to sketch a curve from multiple viewpoints, which is useful if the target curve cannot fit within the camera's field of view. We conducted two user studies and found that WARPY can be a viable tool to help users create complex and large curves in AR.
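The essence of lifting 2D strokes to 3D with geometric proxies is a per-sample ray cast against the proxy surface. A minimal sketch for a spherical proxy, illustrative of the idea rather than WARPY's actual solver:

```python
import numpy as np

def project_stroke_onto_sphere(origins, directions, center, radius):
    """Lift 2D stroke samples to 3D by intersecting their camera rays
    with a spherical proxy; returns one hit point (or None) per sample.
    Illustrative of the proxy idea, not WARPY's actual inference."""
    points = []
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        oc = o - center
        b = np.dot(oc, d)
        disc = b * b - (np.dot(oc, oc) - radius ** 2)
        if disc < 0:
            points.append(None)          # ray misses the proxy
            continue
        t = -b - np.sqrt(disc)           # nearest intersection
        points.append(o + t * d if t > 0 else None)
    return points
```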
{"title":"WARPY: Sketching Environment-Aware 3D Curves in Mobile Augmented Reality","authors":"Rawan Alghofaili, Cuong Nguyen, Vojtech Krs, N. Carr, R. Mech, L. Yu","doi":"10.1109/VR55154.2023.00052","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00052","url":null,"abstract":"Three-dimensional curve drawing in Augmented Reality (AR) enables users to create 3D curves that fit within the real-world scene. It has applications in 3D design, sculpting, and animation. However, the task complexity increases when the desirable path for the curve is obstructed by the physical environment or by what the camera can see. For example, it is difficult to draw a curve that wraps around an object or scales to out-of-reach places. We propose WARPY, an environment-aware 3D curve drawing tool for mobile AR. Our system enables users to draw freeform curves from a distance in AR by combining 2D-to-3D sketch inference with geometric proxies. Geometric Proxies can be obtained via 3D scanning or from a list of pre-defined primitives. WARPY also provides a multi-view mode to enable users to sketch a curve from multiple viewpoints, which is useful if the target curve cannot fit within the camera's field of view. We conducted two user studies and found that WARPY can be a viable tool to help users create complex and large curves in AR.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124164668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I'm Transforming! Effects of Visual Transitions to Change of Avatar on the Sense of Embodiment in AR
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00024
Riku Otono, Adélaïde Genay, Monica Perusquía-Hernández, N. Isoyama, H. Uchiyama, M. Hachet, A. Lécuyer, K. Kiyokawa
Virtual avatars are increasingly featured in Virtual Reality (VR) and Augmented Reality (AR) applications. When embodying a virtual avatar, one may desire to change appearance over the course of the embodiment. However, switching suddenly from one appearance to another can break the continuity of the user experience and potentially impact the sense of embodiment (SoE), especially when the new appearance is very different. In this paper, we explore how applying smooth visual transitions at the moment of the change can help to maintain the SoE and benefit the general user experience. To address this, we implemented an AR system allowing users to embody a regular-shaped avatar that can be transformed into a muscular one through a visual effect. The avatar's transformation can be triggered either by the user through physical action (an “active” transition) or launched automatically by the system (a “passive” transition). We conducted a user study to evaluate the effects of these two types of transformation on the SoE by comparing them to control conditions with no visual feedback of the transformation. Our results show that changing the appearance of one's avatar with an active transition (with visual feedback), compared to a passive transition, helps to maintain the user's sense of agency, a component of the SoE. They also partially suggest that the Proteus effects experienced during the embodiment were enhanced by these transitions. Therefore, we conclude that visual effects controlled by the user when changing their avatar's appearance can benefit their experience by preserving the SoE and intensifying the Proteus effects.
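A smooth appearance change of this kind is typically driven by a normalized progress value fed to a blend-shape or morph weight in the renderer; the sketch below shows such a driver, where the trigger can be called either by the user's physical action (active) or by the system (passive). This is a hypothetical structure, not the paper's implementation:

```python
class AvatarTransition:
    """Drive a gradual morph from a regular to a muscular body shape.
    `progress` in [0, 1] would feed a blend-shape weight each frame.
    Hypothetical sketch, not the paper's implementation."""

    def __init__(self, duration: float):
        self.duration = duration  # seconds for the full transformation
        self.progress = 0.0
        self.active = False

    def trigger(self):
        """Start the transition, whether initiated by the user's
        physical action (active) or launched by the system (passive)."""
        self.active = True

    def update(self, dt: float) -> float:
        """Advance the morph by one frame of `dt` seconds."""
        if self.active and self.progress < 1.0:
            self.progress = min(self.progress + dt / self.duration, 1.0)
        return self.progress
```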
{"title":"I'm Transforming! Effects of Visual Transitions to Change of Avatar on the Sense of Embodiment in AR","authors":"Riku Otono, Adélaïde Genay, Monica Perusquía-Hernández, N. Isoyama, H. Uchiyama, M. Hachet, A. Lécuyer, K. Kiyokawa","doi":"10.1109/VR55154.2023.00024","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00024","url":null,"abstract":"Virtual avatars are more and more often featured in Virtual Reality (VR) and Augmented Reality (AR) applications. When embodying a virtual avatar, one may desire to change of appearance over the course of the embodiment. However, switching suddenly from one appearance to another can break the continuity of the user experience and potentially impact the sense of embodiment (SoE), especially when the new appearance is very different. In this paper, we explore how applying smooth visual transitions at the moment of the change can help to maintain the SoE and benefit the general user experience. To address this, we implemented an AR system allowing users to embody a regular-shaped avatar that can be transformed into a muscular one through a visual effect. The avatar's transformation can be triggered either by the user through physical action (“active” transition), or automatically launched by the system (“passive” transition). We conducted a user study to evaluate the effects of these two types of transformations on the SoE by comparing them to control conditions where there was no visual feedback of the transformation. Our results show that changing the appearance of one's avatar with an active transition (with visual feedback), compared to a passive transition, helps to maintain the user's sense of agency, a component of the SoE. They also partially suggest that the Proteus effects experienced during the embodiment were enhanced by these transitions. Therefore, we conclude that visual effects controlled by the user when changing their avatar's appearance can benefit their experience by preserving the SoE and intensifying the Proteus effects.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131022164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scaling VR Video Conferencing
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00080
Mallesham Dasari, E. Lu, Michael W. Farb, Nuno Pereira, Ivan Liang, Anthony G. Rowe
Virtual Reality (VR) telepresence platforms are being challenged to support live performances, sporting events, and conferences with thousands of users across seamless virtual worlds. Current systems have struggled to meet these demands, which has led to high-profile performance events with groups of users isolated in parallel sessions. The core difference between scaling VR environments and classic 2D video content delivery comes from the dynamic, spatially dependent peer-to-peer communication: users have many pair-wise interactions that grow and shrink as they explore spaces. In this paper, we discuss the challenges of VR scaling and present an architecture that supports hundreds of users with spatial audio and video in a single virtual environment. We leverage the property of spatial locality with two key optimizations: (1) a Quality of Service (QoS) scheme that prioritizes audio and video traffic based on users' locality, and (2) a resource manager that allocates client connections across multiple servers based on user proximity within the virtual world. Through real-world deployments and extensive evaluations in real and simulated environments, we demonstrate the scalability of our platform while showing improved QoS compared with existing approaches.
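The spatial-locality optimization can be illustrated by a distance-based priority function: nearby peers receive full audio and video, mid-range peers audio only, and distant peers are culled. The thresholds and weighting below are hypothetical, not the paper's measured parameters:

```python
import math

def stream_priority(dx: float, dy: float, dz: float,
                    audio_radius: float = 30.0,
                    video_radius: float = 15.0) -> dict:
    """Derive per-peer traffic priorities from virtual-world distance:
    nearby peers get full audio + video, mid-range peers audio only,
    and far peers are dropped. Hypothetical thresholds for illustration."""
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return {
        "video": dist <= video_radius,
        "audio": dist <= audio_radius,
        # Closer peers get a higher scheduling weight for bandwidth.
        "weight": 1.0 / (1.0 + dist),
    }

# A peer 10 m away in the virtual world gets audio and video at weight ~0.09;
# one 40 m away gets neither stream.
print(stream_priority(10.0, 0.0, 0.0))
print(stream_priority(40.0, 0.0, 0.0))
```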
{"title":"Scaling VR Video Conferencing","authors":"Mallesham Dasari, E. Lu, Michael W. Farb, Nuno Pereira, Ivan Liang, Anthony G. Rowe","doi":"10.1109/VR55154.2023.00080","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00080","url":null,"abstract":"Virtual Reality (VR) telepresence platforms are being challenged to support live performances, sporting events, and conferences with thousands of users across seamless virtual worlds. Current systems have struggled to meet these demands which has led to high-profile performance events with groups of users isolated in parallel sessions. The core difference in scaling VR environments compared to classic 2D video content delivery comes from the dynamic peer-to-peer spatial dependence on communication. Users have many pair-wise interactions that grow and shrink as they explore spaces. In this paper, we discuss the challenges of VR scaling and present an architecture that supports hundreds of users with spatial audio and video in a single virtual environment. We leverage the property of spatial locality with two key optimizations: (1) a Quality of Service (QoS) scheme to prioritize audio and video traffic based on users' locality, and (2) a resource manager that allocates client connections across multiple servers based on user proximity within the virtual world. Through real-world deployments and extensive evaluations under real and simulated environments, we demonstrate the scalability of our platform while showing improved QoS compared with existing approaches.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115343181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}