Bot Undercover: On the Use of Conversational Agents to Stimulate Teacher-Students Interaction in Remote Learning
F. G. Pratticò, Javad Alizadeh Shabkhoslati, Navid Shaghaghi, F. Lamberti
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00063
In this work, the use of an undercover conversational agent, acting as a participative student in a synchronous virtual reality distance-learning scenario, is proposed to stimulate social interaction between teacher and students. The outcome of an exploratory user study indicated that the undercover conversational agent is capable of fostering interaction, relieving social pressure, and overall leading to a more satisfactory and engaging learning experience without sacrificing learning performance.
Towards Conducting Effective Locomotion Through Hardware Transformation in Head-Mounted-Device - A Review Study
Y. P. Gururaj, Raghav Mittal, Sai Anirudh Karre, Y. R. Reddy, Syed Azeemuddin
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00181
Locomotion in Virtual Reality (VR) acts as a motion-tracking unit for simulating user movements based on the Degrees-of-Freedom (DOF) of the application. For effective locomotion, VR practitioners may have to transform their hardware from 3-DOF to 6-DOF. In this context, we conducted a literature review of the different motion-tracking methods employed in Head-Mounted Devices (HMDs) to understand how such hardware transformation supports locomotion in VR. Our observations led us to formulate a taxonomy of tracking methods for locomotion in VR based on system design. Our study also captures the different metrics that VR practitioners use to evaluate the hardware in terms of context, performance, and significance for conducting locomotion.
[DC] Dynamic facial expressions on virtual humans for virtual reality (VR) mental health therapy
Shu Wei
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00314
This PhD research aims to implement dynamic facial expressions on virtual humans and explore their potential to enhance the efficacy of virtual reality (VR) mental health therapy. A systematic review of virtual humans in mental health VR indicated that only around 10% of applications used dynamic facial expressions. The potential of virtual humans' emotional richness is understudied, and it is unclear how facial expressions affect individuals in VR. Therefore, we will focus on understanding people's behavioural, physiological, and psychological reactions toward facially animated humans in VR experimental studies. The first study examines whether particular non-verbal behaviours can enhance people's therapy engagement, by applying warm facial expressions and head nods to a virtual coach. Future experiments will further examine individuals' interpretations of facial expressions on virtual crowds and a virtual infant. This research will explore how best to use facial expressions to facilitate VR therapy through the practice of psychiatric research, VR programming, and 3D animation.
Preliminary evaluation of an IVR user experience design model using eye-tracking attention measurements
Elena Dzardanova, Vlasios Kasapakis
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00262
The present study drafts a simplified IVR user experience design model to guide a preliminary evaluation of attention variance across semantically distinct elements. Twenty-seven participants (11 female) freely explored an interactive multi-user virtual setting while equipped with full-body motion tracking and eye tracking, which provided attention-duration measurements. Initial results confirm significant discrepancies in attention across elements and provide a first indication toward a more detailed categorical organization of experience components for follow-up experimentation.
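The attention-duration measure used above can be sketched in a few lines: given a fixed-rate stream of gaze-ray hits, the attention duration per element is the accumulated frame time. This is an illustrative sketch, not the authors' pipeline; the element names and the 90 Hz sampling rate are assumptions.

```python
def attention_durations(gaze_samples, dt):
    """Accumulate per-element attention duration from a fixed-rate
    eye-tracking stream: each sample names the scene element the gaze
    ray currently hits (or None), and contributes one frame interval dt."""
    totals = {}
    for element in gaze_samples:
        if element is not None:
            totals[element] = totals.get(element, 0.0) + dt
    return totals

# Hypothetical 90 Hz stream: frame interval is 1/90 s
samples = ["statue", "statue", None, "door", "statue"]
print(attention_durations(samples, dt=1 / 90))
```

In practice the per-frame element would come from a gaze raycast against the scene, but the aggregation step is the same.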
Asymmetric Normalization in Social Virtual Reality Studies
Jonas Deuchler, Daniel Hepperle, Matthias Wölfel
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00019
We introduce the concept of asymmetric normalization, which refers to decoupling sensory self-perception from the perception of others in a shared virtual environment to present each user with a normalized version of the other users. This concept can be applied to various avatar-related elements such as appearance, location, or non-verbal communication. For example, each participant in a polyadic virtual reality study can see other participants at the average height of the respective test population, while individual participants continue to see themselves embodied according to their actual height. We demonstrate in a pilot experiment how asymmetric normalization enables the acquisition of new information about social interactions and promises to reduce bias to promote replicability and external validity.
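The height example from the abstract can be made concrete with a small sketch (not the authors' implementation; identifiers and values are hypothetical): each viewer receives a per-client render map in which every other avatar is normalized to the population mean, while the viewer's own avatar keeps its actual height to preserve embodied self-perception.

```python
from statistics import mean

def render_heights(viewer_id, actual_heights):
    """Per-viewer avatar heights under asymmetric normalization:
    the viewer sees themselves at their actual height, while every
    other avatar is rendered at the population's average height."""
    normalized = mean(actual_heights.values())
    return {
        uid: (height if uid == viewer_id else normalized)
        for uid, height in actual_heights.items()
    }

# Hypothetical three-participant session (heights in metres)
heights = {"p1": 1.62, "p2": 1.78, "p3": 1.90}
print(render_heights("p1", heights))  # p1 keeps 1.62; p2 and p3 appear averaged
```

The same pattern generalizes to other normalized attributes (appearance, location, gesture amplitude) by swapping the height value for the attribute in question.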
Studying the Effect of Physical Realism on Time Perception in a HAZMAT VR Simulation
Kadir Lofca, Jason Haskins, J. Jerald, Regis Kopper
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00293
Our research focuses on how physical props in virtual reality (VR) can affect users' time perception. We designed an experiment with the goal of comparing users' perception of time when using physical props in VR as compared to standard controllers and only virtual elements. In order to quantify this effect, time estimates for both conditions are compared to time estimates for a matching real-world task. In this experiment, participants assume the role of a firefighter trainee going through a HAZMAT scenario, where they touch and interact with different physical props that match the virtual elements of the scene.
Investigating Display Position of a Head-Fixed Augmented Reality Notification for Dual-task
Hyunjin Lee, Woontack Woo
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00143
Providing additional information at an appropriate position on an augmented reality (AR) head-mounted display (HMD) can improve performance and usability in dual-task situations. Our study therefore investigated where to place notifications for dual-task use. We compared eight display positions under two task conditions (single and dual task) to identify the appropriate area for displaying notifications. We confirmed that the middle-right position reduces response time and task load. In contrast, the top-left is the position to avoid when displaying notifications during an AR dual task. Our study contributes to the design of AR notifications on HMDs to enhance everyday AR experiences.
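A head-fixed notification slot such as "middle-right" is typically specified as an angular offset from the view center. The abstract does not give the study's exact geometry, so the angles and depth below are hypothetical; this is only a minimal sketch of turning visual-angle offsets into a position in the camera's coordinate frame.

```python
import math

def head_fixed_offset(h_deg, v_deg, depth):
    """Place a head-fixed element at (horizontal, vertical) degrees of
    visual angle from the view center, at a given depth in metres.
    Returns an (x, y, z) offset in the camera frame: x right, y up,
    z forward along the gaze direction."""
    x = depth * math.tan(math.radians(h_deg))
    y = depth * math.tan(math.radians(v_deg))
    return (x, y, depth)

# e.g. a hypothetical 'middle-right' slot 15 degrees right of center at 2 m
print(head_fixed_offset(15, 0, 2.0))
```

Because the offset is expressed relative to the camera, the notification follows head rotation, which is what makes it head-fixed rather than world-fixed.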
An Examination on Reduction of Displayed Character Shake while Walking in Place with AR Glasses
Hiromu Koide, K. Kanari, Mie Sato
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00244
In recent years, augmented reality (AR) has started to be used in our daily lives. AR glasses are used while walking, a normal part of daily life, but walking causes the text displayed on the glasses to shake. This reduces both readability and our attention to what is in front of us, and increases discomfort. We propose a method of fixing the displayed text in place to compensate for shaking while walking and thereby reduce these adverse effects. Experiments revealed the effectiveness of our reduction method and its influence on the display distance of the text.
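The abstract does not specify how the text is fixed; one common way to suppress high-frequency walking shake is to low-pass filter the text anchor's pose, shown here as a one-dimensional exponential-smoothing sketch. The smoothing factor and the sample trace are illustrative, not taken from the paper.

```python
def stabilize(positions, alpha=0.1):
    """Exponential (low-pass) smoothing of a head-locked text anchor:
    a small alpha suppresses high-frequency walking shake while still
    letting the text follow slow, deliberate head motion."""
    smoothed = [positions[0]]
    for p in positions[1:]:
        prev = smoothed[-1]
        smoothed.append(prev + alpha * (p - prev))
    return smoothed

# Hypothetical vertical head bob while walking (metres, per frame)
print(stabilize([0.0, 1.0, 0.0, 1.0, 0.0], alpha=0.1))
```

In a real AR pipeline the same filter would be applied per axis to the anchor's position (and, with quaternion interpolation, its orientation) each frame.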
“What a Mess!”: Traces of Use to Increase Asynchronous Social Presence in Shared Virtual Environments
Linda Hirsch, Anna Haller, A. Butz, Ceenu George
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00150
Shared virtual environments (VEs) face challenges in conveying and triggering users' feelings of social presence. Traces of use are implicit evidence of prior interactions that support social awareness in the real environment (RE). However, they have not been explored in VEs so far. We investigate the effect of such traces on users' perception of asynchronous social presence in a within-subject study (N = 26) by comparing the users' experience with and without traces. The traces significantly increased the feeling of social presence. We contribute an initial exploration of the traces-of-use concept in VEs to inform the design of shared social spaces for long-term use.
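A within-subject comparison like the one above (with vs. without traces, N = 26) is commonly analyzed with a paired t statistic over per-participant difference scores. The sketch below uses hypothetical rating data for four participants, not the study's results.

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(with_traces, without_traces):
    """Paired t statistic for a within-subject design: each participant
    contributes one difference score (with minus without), and t is the
    mean difference divided by its standard error."""
    d = [a - b for a, b in zip(with_traces, without_traces)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Hypothetical social-presence ratings (same four participants, two conditions)
print(paired_t([4.1, 3.8, 4.5, 4.0], [3.2, 3.5, 3.9, 3.1]))
```

The resulting t value would then be checked against the t distribution with N − 1 degrees of freedom to judge significance.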
BabiaXR: Virtual Reality software data visualizations for the Web
David Moreno-Lumbreras, Jesus M. Gonzalez-Barahona, Andrea Villaverde
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). Pub Date: 2022-03-01. DOI: 10.1109/VRW55335.2022.00026
We present BabiaXR, a tool to analyze, extract, and visualize data in an immersive virtual reality web environment. It builds on the WebXR and WebGL standards, making VR visualizations more accessible. The approach focuses on the analysis of a specific visualization, CodeCity, a well-known city metaphor for visualizing source code metrics in a 3D environment.