Preliminary evaluation of an IVR user experience design model using eye-tracking attention measurements
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00262
Elena Dzardanova, Vlasios Kasapakis
The present study drafts a simplified IVR user experience design model to guide a preliminary evaluation of attention variance across semantically distinct elements. 27 participants (11 female) freely explored an interactive multi-user virtual setting while equipped with full-body motion support and eye-tracking, which provided attention-duration measurements. Initial results confirm significant discrepancies in attention across elements and provide a first indication toward a more detailed categorical organization of experience components for follow-up experimentation.
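The abstract gives no implementation details, but the core measurement, per-category attention duration derived from eye-tracking, can be illustrated with a minimal sketch. The sample format, sampling interval, and category names below are hypothetical assumptions, not the authors' setup.

```python
from collections import defaultdict

# Hypothetical gaze log: (timestamp_s, element_id) pairs produced by the
# eye tracker's hit test against scene geometry; None means no element hit.
gaze_samples = [
    (0.00, "painting_1"), (0.02, "painting_1"), (0.04, None),
    (0.06, "npc_guide"), (0.08, "npc_guide"), (0.10, "door_exit"),
]

# Hypothetical mapping from scene elements to semantic categories.
semantic_category = {
    "painting_1": "decorative",
    "npc_guide": "social",
    "door_exit": "functional",
}

def attention_durations(samples, categories, sample_dt=0.02):
    """Accumulate total dwell time (s) per semantic category.

    Each sample is assumed to cover one fixed sampling interval, so a
    category's dwell time is simply (number of hits) * sample_dt.
    """
    totals = defaultdict(float)
    for _, element in samples:
        if element is not None:
            totals[categories[element]] += sample_dt
    return dict(totals)

print(attention_durations(gaze_samples, semantic_category))
# e.g. {'decorative': 0.04, 'social': 0.04, 'functional': 0.02}
```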
{"title":"Preliminary evaluation of an IVR user experience design model using eye-tracking attention measurements","authors":"Elena Dzardanova, Vlasios Kasapakis","doi":"10.1109/VRW55335.2022.00262","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00262","url":null,"abstract":"The present study drafts a simplified IVR user experience design model to guideline a preliminary evaluation of attention variance for semantically distinct elements. 27 participants (11 females) freely explored an interactive multi-user virtual setting, whilst equipped with full-body motion support and eye-tracking which procured attention duration measurements. Initial results confirm significant element attention discrepancy and provide the first indication toward a more detailed categorical organization of experience components for follow-up experimentation.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131616252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Testbed for Exploring Multi-Level Precueing in Augmented Reality
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00121
Jen-Shuo Liu, B. Tversky, Steven K. Feiner
Precueing information about upcoming subtasks before performing them has the potential to make an entire task faster and easier to accomplish than cueing only the current subtask. Most AR and VR research on precueing has addressed path-following tasks requiring simple actions at a series of locations, such as pushing a button or simply visiting the location. We present a testbed for exploring multi-level precueing in a richer task that requires the user to move their hand between specified locations, transport an object between some of them, and rotate it to a designated orientation.
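As a rough illustration of what a multi-level precueing testbed must represent, the sketch below models the richer task as a sequence of subtasks (move, transport, rotate) and selects the current cue plus a configurable number of precues. All names and the data layout are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Subtask:
    """One step of the richer task: move the hand to `target`, optionally
    carrying an object, optionally rotating it to a goal orientation."""
    target: Vec3
    carry_object: Optional[str] = None          # object to transport, if any
    goal_rotation_deg: Optional[float] = None   # orientation to reach, if any

def cues_to_show(subtasks, current_index, precue_depth):
    """Return the current cue plus up to `precue_depth` precues."""
    return subtasks[current_index:current_index + 1 + precue_depth]

task = [
    Subtask(target=(0.2, 1.0, 0.4)),
    Subtask(target=(0.5, 1.0, 0.4), carry_object="block_a"),
    Subtask(target=(0.5, 1.2, 0.1), carry_object="block_a",
            goal_rotation_deg=90.0),
]

# Cue the current subtask and precue one upcoming subtask.
for cue in cues_to_show(task, current_index=0, precue_depth=1):
    print(cue)
```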
{"title":"A Testbed for Exploring Multi-Level Precueing in Augmented Reality","authors":"Jen-Shuo Liu, B. Tversky, Steven K. Feiner","doi":"10.1109/VRW55335.2022.00121","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00121","url":null,"abstract":"Precueing information about upcoming subtasks prior to performing them has the potential to make an entire task faster and easier to accomplish than cueing only the current subtask. Most AR and VR research on precueing has addressed path-following tasks requiring simple actions at a series of locations, such as pushing a button or just visiting that location. We present a testbed for exploring multi-level precueing in a richer task that requires the user to move their hand between specified locations, transporting an object between some of them, and rotating it to a designated orientation.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"213 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132232067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moving Soon? Rearranging Furniture using Mixed Reality
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00250
Shihao Song, Yujia Wang, Wei Liang, Xiangyuan Li
We present a mixed reality (MR) system that helps users move a houseful of furniture from an existing home into a new space, inheriting furniture-layout preferences from the previous scene. Using the RGB-D cameras mounted on a mixed reality device, the Microsoft HoloLens 2, our system first reconstructs a 3D model of the existing scene and leverages a deep learning-based approach to detect and group objects, e.g., grouping the bed with its nightstand. Our system then generates a personalized furniture layout by optimizing a cost function that incorporates the analyzed relevance between and within groups as well as the spatial constraints of the new layout. The experimental results show that our system can transfer furniture layouts to new spaces automatically while preserving the user's preferences.
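The abstract names the key mechanism, minimizing a cost function over group relevance and spatial constraints, without detail. The following sketch shows one plausible shape of such an optimization, using a quadratic penalty on learned pairwise distances and plain random search; the terms, weights, and optimizer are all assumptions, not the paper's method.

```python
import math
import random

# Hypothetical relevance of between/within-group relations (e.g. the
# bed-nightstand pairing learned from the previous home) and the
# preferred distances (m) observed there.
pair_relevance = {("bed", "nightstand"): 1.0, ("sofa", "tv"): 0.8}
preferred_distance = {("bed", "nightstand"): 0.5, ("sofa", "tv"): 2.5}
ROOM_W, ROOM_H = 5.0, 4.0  # spatial constraints of the new room (m)

def cost(layout):
    """Penalize deviations from learned pairwise distances and
    positions outside the room bounds."""
    c = 0.0
    for (a, b), w in pair_relevance.items():
        (ax, ay), (bx, by) = layout[a], layout[b]
        d = math.hypot(ax - bx, ay - by)
        c += w * (d - preferred_distance[(a, b)]) ** 2
    for x, y in layout.values():
        if not (0 <= x <= ROOM_W and 0 <= y <= ROOM_H):
            c += 100.0  # hard spatial-constraint penalty
    return c

def optimize(items, iters=5000):
    """Plain random search; the paper's actual optimizer is not specified."""
    best = {i: (random.uniform(0, ROOM_W), random.uniform(0, ROOM_H))
            for i in items}
    best_c = cost(best)
    for _ in range(iters):
        cand = dict(best)
        item = random.choice(items)
        x, y = cand[item]
        cand[item] = (x + random.gauss(0, 0.3), y + random.gauss(0, 0.3))
        if (c := cost(cand)) < best_c:
            best, best_c = cand, c
    return best

print(optimize(["bed", "nightstand", "sofa", "tv"]))
```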
{"title":"Moving Soon? Rearranging Furniture using Mixed Reality","authors":"Shihao Song, Yujia Wang, Wei Liang, Xiangyuan Li","doi":"10.1109/VRW55335.2022.00250","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00250","url":null,"abstract":"We present a mixed reality (MR) system to help users with a houseful of furniture moving from an existing home into a new space, inheriting the preferences of furniture layout from the previous scene. With the RGB-D cameras mounted on a mixed reality device, Microsoft HoloLens 2, our system first reconstructs the 3D model of the ex-isting scene and leverages a deep learning-based approach to detect and to group objects, e.g., grouping the bed with nightstand. Then, our system generates a personalized furniture layout by optimizing a cost function, incorporating the analyzed relevance of between and within groups, and the spatial constraints of the new layout. The experiment results show that our system can transfer furniture layout to new spaces automatically, keeping the user's preferences well.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133095114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented Reality and Surgery: Human Factors, Challenges, and Future Steps
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00101
Soojeong Yoo, A. Blandford
Augmented reality (AR) has shown much potential when applied in surgical settings, where it can help guide surgeons through complex procedures, train students, and provide heads-up, hands-free spatial information. In this position paper, we discuss some of the current use cases of AR in surgical practice, evaluation measures, challenges, and potential directions for future research. The aim of this paper is to start an important discussion to improve future research and outcomes for surgical system implementations.
{"title":"Augmented Reality and Surgery: Human Factors, Challenges, and Future Steps","authors":"Soojeong Yoo, A. Blandford","doi":"10.1109/VRW55335.2022.00101","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00101","url":null,"abstract":"Augmented reality (AR) has shown much potential when applied in surgical settings, which can help guide surgeons through complex procedures, train students, and provide heads- up and hands-free spatial information. In this position paper, we discuss some of the current use cases of AR in surgical practice, evaluation measures, challenges and potential directions for future research. The aim of this paper is to start important discussion to improve future research and outcomes for system implementations for surgery.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132121722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asymmetric Normalization in Social Virtual Reality Studies
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00019
Jonas Deuchler, Daniel Hepperle, Matthias Wölfel
We introduce the concept of asymmetric normalization, which refers to decoupling sensory self-perception from the perception of others in a shared virtual environment, so that each user is presented with a normalized version of the other users. This concept can be applied to various avatar-related elements such as appearance, location, or non-verbal communication. For example, each participant in a polyadic virtual reality study can see other participants at the average height of the respective test population, while individual participants continue to see themselves embodied at their actual height. We demonstrate in a pilot experiment how asymmetric normalization enables the acquisition of new information about social interactions and promises to reduce bias, promoting replicability and external validity.
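A minimal sketch of asymmetric normalization as described, applied to avatar height: each viewer sees themselves at their actual height but sees everyone else at the population average. The data and function names are illustrative assumptions only.

```python
from statistics import mean

# Hypothetical actual heights (m) of the study population.
actual_height = {"p1": 1.58, "p2": 1.92, "p3": 1.75}

def displayed_height(viewer, other, population):
    """Asymmetric normalization for avatar height: every viewer sees
    themselves at their actual height, but sees all *other* participants
    at the population's average height."""
    if viewer == other:
        return population[viewer]        # self-perception left intact
    return mean(population.values())     # others are normalized

for other in actual_height:
    h = displayed_height("p1", other, actual_height)
    print("p1 sees", other, "at", round(h, 2), "m")
# p1 sees themselves at 1.58 m; p2 and p3 both appear at the 1.75 m average.
```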
{"title":"Asymmetric Normalization in Social Virtual Reality Studies","authors":"Jonas Deuchler, Daniel Hepperle, Matthias Wölfel","doi":"10.1109/VRW55335.2022.00019","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00019","url":null,"abstract":"We introduce the concept of asymmetric normalization, which refers to decoupling sensory self-perception from the perception of others in a shared virtual environment to present each user with a normalized version of the other users. This concept can be ap-plied to various avatar-related elements such as appearance, location, or non-verbal communication. For example, each participant in a polyadic virtual reality study can see other participants at an average height of the respective test population, while individual participants continue to see themselves embodied according to their actual height. We demonstrate in a pilot experiment how asymmetric normalization enables the acquisition of new information about social interactions and promises to reduce bias to promote replicability and external validity.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123928095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Studying the Effect of Physical Realism on Time Perception in a HAZMAT VR Simulation
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00293
Kadir Lofca, Jason Haskins, J. Jerald, Regis Kopper
Our research focuses on how physical props in virtual reality (VR) can affect users' time perception. We designed an experiment to compare users' perception of time when using physical props in VR versus standard controllers with only virtual elements. To quantify this effect, time estimates for both conditions are compared to time estimates for a matching real-world task. In the experiment, participants assume the role of a firefighter trainee going through a HAZMAT scenario, in which they touch and interact with physical props that match the virtual elements of the scene.
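The comparison the abstract describes reduces to simple arithmetic: relate each condition's time estimates to the actual task duration and to the real-world baseline. A hypothetical sketch with made-up numbers:

```python
# Hypothetical verbal time estimates (s) for the same 300 s task under
# three conditions: physical props, standard controllers, real world.
actual_duration = 300.0
estimates = {
    "props":       [280.0, 310.0, 295.0],
    "controllers": [250.0, 260.0, 240.0],
    "real_world":  [290.0, 305.0, 300.0],
}

# A common way to compare conditions is the ratio of estimated to actual
# duration (1.0 = accurate; < 1.0 = time felt shorter than it was).
for condition, values in estimates.items():
    ratio = sum(values) / len(values) / actual_duration
    print(f"{condition}: mean estimation ratio = {ratio:.2f}")
```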
{"title":"Studying the Effect of Physical Realism on Time Perception in a HAZMAT VR Simulation","authors":"Kadir Lofca, Jason Haskins, J. Jerald, Regis Kopper","doi":"10.1109/VRW55335.2022.00293","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00293","url":null,"abstract":"Our research focuses on how physical props in virtual reality (VR) can affect users' time perception. We designed an experiment with the goal of comparing users' perception of time when using physical props in VR as compared to standard controllers and only virtual elements. In order to quantify this effect, time estimates for both conditions are compared to time estimates for a matching real-world task. In this experiment, participants assume the role of a firefighter trainee, going through a HAZMAT scenario, where they touch and interact with different physical props that match the virtual elements of the scene.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124410936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating Display Position of a Head-Fixed Augmented Reality Notification for Dual-task
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00143
Hyunjin Lee, Woontack Woo
Providing additional information at the proper position on an augmented reality (AR) head-mounted display (HMD) can help increase AR performance and usability in dual-task situations. Our study therefore investigated where to place notifications for dual-tasks. We compared eight display positions and two task conditions (single and dual task) to identify the appropriate area for displaying notifications. We confirmed that the middle-right position reduces response time and task load, whereas the top-left is the position to avoid when providing notifications during an AR dual-task. Our study contributes to designing AR notifications on HMDs that enhance everyday AR experiences.
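The abstract reports which of eight head-fixed positions work best without specifying coordinates. The sketch below shows one plausible way to realize such positions as yaw/pitch offsets in the head's local frame; the grid spacing and viewing distance are assumptions, not the study's values.

```python
import math

# Hypothetical 3x3 grid (minus center) of head-fixed positions, as
# (yaw, pitch) offsets in degrees from the forward gaze direction.
# Per the study: middle-right performed best; top-left should be avoided.
POSITIONS_DEG = {
    "top-left": (-15, 10),     "top-center": (0, 10),     "top-right": (15, 10),
    "middle-left": (-15, 0),                              "middle-right": (15, 0),
    "bottom-left": (-15, -10), "bottom-center": (0, -10), "bottom-right": (15, -10),
}

def anchor_point(name, distance=1.5):
    """Convert a named position into a 3D point in the head's local frame
    (x right, y up, z forward), at `distance` meters from the eyes."""
    yaw, pitch = (math.radians(a) for a in POSITIONS_DEG[name])
    x = distance * math.sin(yaw) * math.cos(pitch)
    y = distance * math.sin(pitch)
    z = distance * math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

print(anchor_point("middle-right"))  # place notifications here
```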
{"title":"Investigating Display Position of a Head-Fixed Augmented Reality Notification for Dual-task","authors":"Hvuniin Lee, Woontack Woo","doi":"10.1109/VRW55335.2022.00143","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00143","url":null,"abstract":"Providing additional information in the proper position of augmented reality (AR) head-mounted display (HMD) can help increase AR performance and usability for dual-task. Therefore, our study inves-tigated how to place notifications for the dual-task to address this. We compared eight display positions and two tasks (single and dual tasks) to identify the appropriate area for displaying notifications. We confirmed that the middle-right reduces response time and task load. In contrast, the top-left is the location, which should avoid providing any notification in AR dual-task. Our study contributes to designing AR notifications on HMDs to enhance everyday AR experiences.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128743048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Examination on Reduction of Displayed Character Shake while Walking in Place with AR Glasses
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00244
Hiromu Koide, K. Kanari, Mie Sato
In recent years, augmented reality (AR) has started to be used in our daily lives. AR glasses are worn while walking, a normal part of daily life, but walking causes the text displayed on the glasses to shake. This reduces both readability and our attention to what is in front of us, and increases discomfort. We propose a method that stabilizes the displayed text against the shaking that occurs while walking, to reduce these adverse effects. Experiments revealed the effectiveness of our reduction method and the influence of the text display distance.
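The paper's stabilization method is not detailed in the abstract; a common baseline for damping walking-induced shake is to low-pass filter the text anchor's pose, as in this hypothetical sketch (filter type and parameters are assumptions):

```python
def stabilize(anchor_positions, alpha=0.1):
    """Exponential low-pass filter over the head-fixed text anchor.

    A small alpha means heavy smoothing: the text lags slow head motion
    slightly but no longer follows the high-frequency bob of each step.
    """
    smoothed = []
    state = anchor_positions[0]
    for p in anchor_positions[1:]:
        state = tuple(s + alpha * (raw - s) for s, raw in zip(state, p))
        smoothed.append(state)
    return smoothed

# Hypothetical vertical bob of the raw anchor while walking (y alternates
# by +/-0.02 m; x and z held constant for clarity).
raw = [(0.0, 0.02 * (-1) ** i, 1.5) for i in range(8)]
for pos in stabilize(raw):
    print(round(pos[1], 4))  # the +/-0.02 oscillation is strongly damped
```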
{"title":"An Examination on Reduction of Displayed Character Shake while Walking in Place with AR Glasses","authors":"Hiromu Koide, K. Kanari, Mie Sato","doi":"10.1109/VRW55335.2022.00244","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00244","url":null,"abstract":"In recent years, augmented reality (AR) has started to be used in our daily lives. AR glasses are used when walking, which is a normal part of daily life, but walking causes the text displayed on the glasses to shake. This reduces both readability and our attention to what is in front of us, and increases discomfort. We propose a method of fixing the text to take account of shaking while walking to reduce these adverse effects. Experiments revealed the effectiveness of our reduction method and its influence on the distance of the text display.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129122338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“What a Mess!”: Traces of Use to Increase Asynchronous Social Presence in Shared Virtual Environments
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00150
Linda Hirsch, Anna Haller, A. Butz, Ceenu George
Shared virtual environments (VEs) face the challenge of conveying and triggering users' feelings of social presence. Traces of use are implicit evidence of prior interactions that support social awareness in the real environment (RE). However, they have not been explored in VEs so far. We investigate the effect of traces on users' perception of asynchronous social presence in a within-subject study (N = 26) by comparing the users' experience with and without traces. The traces significantly increased the feeling of social presence. We contribute an initial exploration of the traces-of-use concept in VEs as a way to design shared social spaces for long-term use.
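As a rough sketch of what a traces-of-use mechanism in a VE could look like, the object below accumulates interaction evidence (touch counts, wear) that later visitors can perceive. This illustrates the concept only; it is not the study's implementation, and all names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TraceableObject:
    """A virtual object that accumulates visible traces of use, giving
    later visitors implicit evidence of prior, asynchronous interactions."""
    name: str
    touch_count: int = 0
    wear: float = 0.0                     # could drive a texture blend in the VE
    moved_by: list = field(default_factory=list)

    def on_interact(self, user):
        self.touch_count += 1
        self.moved_by.append(user)
        # Wear saturates toward 1.0 so traces stay plausible over time.
        self.wear = min(1.0, self.wear + 0.1)

mug = TraceableObject("coffee_mug")
for user in ["alice", "bob", "alice"]:
    mug.on_interact(user)
print(mug.touch_count, round(mug.wear, 1), mug.moved_by)
# 3 0.3 ['alice', 'bob', 'alice']  -> could render as smudges or displacement
```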
{"title":"“What a Mess!”: Traces of Use to Increase Asynchronous Social Presence in Shared Virtual Environments","authors":"Linda Hirsch, Anna Haller, A. Butz, Ceenu George","doi":"10.1109/VRW55335.2022.00150","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00150","url":null,"abstract":"Shared virtual environments (VEs) are challenged conveying and triggering users' feelings of social presence. Traces of use are implicit evidence of prior interactions that support social awareness in the real environment (RE). However, they have not been explored in VEs so far. We investigate the traces' effect on users' perception of asynchronous social presences in a within-subject study ($mathrm{N}=26$) by comparing the users' experience with and without traces. The traces significantly increased the feeling of social presence. We contribute an initial exploration of the traces of use concept in VE to design shared social spaces for long-term use.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116718820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BabiaXR: Virtual Reality software data visualizations for the Web
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00026
David Moreno-Lumbreras, Jesus M. Gonzalez-Barahona, Andrea Villaverde
We present BabiaXR, a tool to analyze, extract, and visualize data in an immersive virtual reality web environment. Built on the WebXR and WebGL standards, it makes VR visualizations more accessible. The approach focuses on the analysis of a specific visualization, CodeCity, a well-known city metaphor for visualizing source code metrics in a 3D environment.
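The essence of the CodeCity metaphor, mapping source-code metrics onto building geometry, can be sketched in a few lines. The specific metric-to-dimension mapping below is a common CodeCity convention, assumed rather than taken from the paper, and the file data is made up.

```python
# Hypothetical CodeCity-style mapping: each source file becomes a building
# whose footprint and height encode code metrics.
files = [
    {"name": "parser.py", "loc": 1200, "functions": 40},
    {"name": "utils.py",  "loc": 150,  "functions": 12},
    {"name": "render.py", "loc": 800,  "functions": 25},
]

def to_building(f, loc_per_meter=100.0):
    """Map metrics to geometry: lines of code -> building height,
    function count -> footprint area (a common CodeCity convention)."""
    side = max(0.5, f["functions"] ** 0.5 / 2.0)
    return {
        "name": f["name"],
        "width": side, "depth": side,
        "height": f["loc"] / loc_per_meter,
    }

for b in map(to_building, files):
    print(b)
```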
{"title":"BabiaXR: Virtual Reality software data visualizations for the Web","authors":"David Moreno-Lumbreras, Jesus M. Gonzalez-Barahona, Andrea Villaverde","doi":"10.1109/VRW55335.2022.00026","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00026","url":null,"abstract":"We present Babiaxr, a tool to analyze, extract and visualize data in an immersive virtual reality web environment, making use of the WebXR and WebGL standards, making VR visualizations more accessible. The approach focuses on the analysis of a specific visual-ization, called CodeCity, a well known city metaphor for visualizing source code metrics in a 3D environment.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115242937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}