Heuristic Short-term Path Prediction for Spontaneous Human Locomotion in Virtual Open Spaces
Christian Hirt, Marco Ketzel, Philip Graf, Christian Holz, A. Kunz
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00169
Published in: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Abstract: Redirected Walking (RDW) shrinks large virtual environments to fit small physical tracking spaces while supporting natural locomotion. Predictive RDW, one of the core concepts of RDW, relies on predicting users' future paths to adjust the induced redirection, which manipulates users' perception so that their physical paths deviate from the intended virtual paths. Current path predictions either make drastic simplifications or build on complex human locomotion models that are inappropriate for real-time planning and thus unusable for RDW. Further, adapting existing predictive RDW algorithms to unconstrained open space exponentially increases their computational complexity, rendering them inapplicable in real time. In this work-in-progress paper, we discuss the currently prevalent issues of path prediction in RDW and propose simple yet flexible path prediction models that support dynamic virtual open spaces. Our proposed prediction models consist of two shapes: a drop shape represented by the lemniscate of Bernoulli and a sector shape. They define an area in which linear and clothoidic walking trajectories will be investigated.
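The lemniscate of Bernoulli mentioned in this abstract has a simple closed form, which makes it cheap enough for real-time use. As a hypothetical illustration (not code from the paper), the following samples points on the curve (x² + y²)² = a²(x² − y²); the parameter `a` and the idea of keeping one lobe as a "drop-shaped" prediction area are assumptions for the sketch:

```python
import math

def lemniscate_points(a, n=64):
    """Sample n points on the lemniscate of Bernoulli (x^2+y^2)^2 = a^2 (x^2-y^2).

    Uses the standard rational parametrization; `a` sets the half-width of the
    figure. Both lobes are returned; a path-prediction model could keep only
    the lobe ahead of the user as the "drop shape" area.
    """
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        d = 1 + math.sin(t) ** 2
        pts.append((a * math.cos(t) / d, a * math.sin(t) * math.cos(t) / d))
    return pts
```

Every sampled point satisfies the implicit equation exactly (up to floating-point error), so the curve can be rescaled per-user by changing `a` alone.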
Omnidirectional Neural Radiance Field for Immersive Experience
Qiaoge Li, Itsuki Ueda, Chun Xie, Hidehiko Shishido, I. Kitahara
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00221
Abstract: This paper proposes a method that uses only RGB information from multiple captured panoramas to provide an immersive viewing experience of real scenes. We generate an omnidirectional neural radiance field by adopting the Fibonacci sphere model for sampling rays, together with several optimized positional encoding approaches. We tested our method on synthetic and real scenes and achieved satisfactory empirical performance. Our results make an immersive, continuous free-viewpoint experience possible.
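The Fibonacci sphere model named in this abstract is a standard construction for near-uniform directions on a sphere. A minimal sketch of that general technique (the paper's actual ray-sampling code is not shown here; function name and usage are illustrative):

```python
import math

def fibonacci_sphere(n):
    """Generate n near-uniform unit direction vectors via the Fibonacci lattice.

    Latitudes are spaced uniformly in z; longitudes advance by the golden
    angle, which spreads points evenly without clustering at the poles.
    """
    golden_angle = math.pi * (3 - math.sqrt(5))
    dirs = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n          # uniform steps in z in (-1, 1)
        r = math.sqrt(1 - z * z)           # radius of the latitude circle
        theta = golden_angle * i           # longitude by golden-angle steps
        dirs.append((r * math.cos(theta), r * math.sin(theta), z))
    return dirs
```

For a NeRF-style pipeline, each returned unit vector would serve as one ray direction from the capture center.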
Who do you look like? - Gaze-based authentication for workers in VR
Karina LaRubbio, Jeremiah Wright, Brendan David-John, A. Enqvist, Eakta Jain
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00223
Abstract: Behavior-based authentication methods are actively being developed for XR. In particular, gaze-based methods promise continuous authentication of remote users. However, gaze behavior depends on the task being performed, and the identification rate is typically highest when comparing data from the same task. In this study, we compared authentication performance using VR gaze data during random-dot viewing, 360-degree image viewing, and a nuclear training simulation. We found that within-task authentication performed best for image viewing (72%). The implication for practitioners is to integrate image viewing into a VR workflow to collect gaze data that is viable for authentication.
ORUN - A Virtual reality serious-game for kinematics learning
Jhasmani Tito, Tânia Basso, Regina L. O. Moraes
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00340
Abstract: Virtual Reality (VR) is one of the new educational technologies of particular importance due to the possibilities it offers, such as hands-on experiences for learning physics phenomena. This demo presents a VR-based serious game focused on the teaching and learning of specific concepts of kinematics. The game is intended to deliver an immersive experience in which the student plays an active role, with a game design that incorporates theoretical concepts to maintain engagement throughout the tasks.
Simulating Wind Tower Construction Process for Virtual Construction Safety Training and Active Learning
Wanwan Li, B. Esmaeili, L. Yu
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00082
Abstract: The growth of the wind energy industry in the United States has been remarkable; however, despite their significance and installation capacity, wind energy investments such as wind turbines and wind farms involve various safety risks. Increasing construction workers' awareness of the associated hazards requires engaging training programs. To address this emergent need, we develop a realistic simulation of the wind tower construction process in an immersive virtual reality environment, aiming to inform workers of the general safety and health hazards associated with the critical processes used in constructing, maintaining, and demolishing wind towers.
Designing Sound Synthesis Interfaces for Head-mounted Augmented Reality
Yichen Wang, Charles Martin
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00078
Abstract: We report on designing a sound synthesis interface for a head-mounted augmented reality environment. The increased accessibility of augmented reality (AR) devices has incentivised the exploration of sound applications for music performance in computer music and other relevant communities. However, interaction affordances vary with the specific AR device and thus imply different design considerations. In this poster, we present different interface prototypes for a frequency modulation synthesis system on the Microsoft HoloLens 2 and report the insights gained during their development.
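Frequency modulation synthesis, the technique behind the interface described in this abstract, reduces to a compact formula: a carrier sinusoid whose phase is modulated by a second sinusoid, y(t) = sin(2π·fc·t + I·sin(2π·fm·t)). A hypothetical sketch of that textbook form (frequencies, modulation index, and function names are illustrative, not taken from the paper):

```python
import math

def fm_sample(t, fc=220.0, fm=110.0, index=2.0):
    """One sample of two-operator FM: carrier fc phase-modulated by fm.

    `index` (the modulation index) controls sideband richness; index=0
    degenerates to a pure sine at the carrier frequency.
    """
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

def fm_buffer(duration=0.01, sr=48000, **kw):
    """Render `duration` seconds of FM audio at sample rate `sr`."""
    return [fm_sample(i / sr, **kw) for i in range(int(duration * sr))]
```

An AR interface like the one described would map gesture or gaze parameters onto `fc`, `fm`, and `index` in real time.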
Understanding the Capabilities of the HoloLens 1 and 2 in a Mixed Reality Environment for Direct Volume Rendering with a Ray-casting Algorithm
Hoijoon Jung, Younhyun Jung, Jinman Kim
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00200
Abstract: Direct volume rendering (DVR) is a standard technique for visualizing scientific volumetric data in three dimensions (3D). On current mixed reality head-mounted displays (MR-HMDs), DVR output can be displayed as a 3D hologram superimposed on the original 'physical' object, offering supplementary x-ray views of its interior features. These MR-HMDs are stimulating innovations in a range of scientific application fields, yet their DVR capabilities have yet to be thoroughly investigated. In this study, we explore a key requirement, rendering latency, for MR-HMDs by proposing a benchmark application with 5 volumes and 30 rendering parameter variations.
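The ray-casting algorithm named in this title accumulates color and opacity sample-by-sample along each viewing ray. A minimal sketch of the standard front-to-back compositing step (not the paper's benchmark code; the early-termination threshold is an assumed typical value) shows where the per-ray cost that drives rendering latency comes from:

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (color, alpha) samples along one ray.

    Accumulates until the ray is nearly opaque, then stops (early ray
    termination) -- the classic DVR optimization, since further samples
    would contribute almost nothing.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # remaining transparency scales the contribution
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                 # early ray termination threshold (assumed)
            break
    return color, alpha
```

On an MR-HMD this loop runs per pixel per frame, which is why sample count and volume size dominate the latency such a benchmark measures.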
Toward Using Multi-Modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality
Powen Yao, Yu Hou, Yuan He, Da Cheng, Huanpu Hu, Michael Zyda
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00195
Abstract: In this work, we propose a multi-modal approach to manipulating smart home devices in a smart home environment simulated in virtual reality (VR). We determine the user's target device and the desired action from their utterance, spatial information (gestures, positions, etc.), or a combination of the two. Since the information contained in the user's utterance and the spatial information can be disjoint or complementary, we process the two sources in parallel using an array of machine learning models. We use ensemble modeling to aggregate the results of these models and enhance the quality of the final prediction. We present our preliminary architecture, models, and findings.
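One simple way to aggregate disjoint-or-complementary modality outputs, as this abstract describes, is a weighted score fusion in which either modality alone can still drive the decision. This is a generic illustration of that idea, not the authors' ensemble (the device names, weighting scheme, and function signature are hypothetical):

```python
def ensemble_predict(speech_scores, spatial_scores, w_speech=0.5):
    """Fuse per-device confidences from a speech model and a spatial model.

    Each argument maps device name -> confidence in [0, 1]. Devices missing
    from one modality default to 0.0, so a device seen by only one model
    can still win -- handling the disjoint-information case.
    """
    devices = set(speech_scores) | set(spatial_scores)
    fused = {d: w_speech * speech_scores.get(d, 0.0)
                + (1 - w_speech) * spatial_scores.get(d, 0.0)
             for d in devices}
    return max(fused, key=fused.get), fused
```

Raising `w_speech` shifts trust toward the utterance model, e.g. when spatial tracking is noisy.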
Web XR User Interface Study in Designing 3D Layout Framework in Static Websites
Yongkang Xing, J. Shell, Conor Fahy, Congyuan Wen, Zheng Da, Ho-Yan Kwan
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00057
Abstract: With the rapid expansion of computer technology in recent years, Virtual Reality, Augmented Reality, and Mixed Reality technologies are increasingly reaching the public. This paper describes Web Extended Reality (XR) and its current state. We discuss the advantages of web componentization and the Page Builder System, a well-known framework for web componentization, and analyze the characteristics of XR. We design Web XR User Interface principles that combine XR characteristics with componentization design. The principles cover three aspects: main content, scrollbar, and navigation. We developed a prototype to examine the concept; it shows that the UI principles can provide an immersive user experience. The paper concludes with a possible future outlook based on this design study.