Investigation of the potential use of Virtual Reality for Agoraphobia exposure therapy
Sinéad Barnett, Ian Mills, Frances Cleary
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00300
Abstract: This preliminary research study evaluates the potential of, and need for, virtual reality in the mental health sector, focusing specifically on the treatment of agoraphobia. A survey was sent to participants who have been diagnosed with agoraphobia and are currently receiving treatment for it. The results indicate a demand for virtual reality treatment for agoraphobia, which in turn can motivate future studies of VR therapy.
Excite-O-Meter: an Open-Source Unity Plugin to Analyze Heart Activity and Movement Trajectories in Custom VR Environments
Luis Quintero, P. Papapetrou, J. Muñoz, Jeroen de Mooij, Michael Gaebler
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00017
Abstract: This article explains the new features of the Excite-O-Meter, an open-source tool that enables the collection of bodily data, real-time feature extraction, and post-session data visualization in any custom VR environment developed in Unity. Besides analyzing heart activity, the tool now supports multidimensional time series to study motion trajectories in VR. The paper presents the main functionalities and discusses the relevance of the tool for behavioral and psychophysiological research.
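The abstract mentions real-time feature extraction from heart activity. As a minimal illustration (not the plugin's actual API; the function name and sample values are assumptions here), one standard heart-rate-variability feature such tools typically compute from RR intervals is RMSSD:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD), a standard
    time-domain heart-rate-variability feature, computed from a series
    of RR intervals given in milliseconds."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR series (ms); values are made up for the example.
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # → 14.36
```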
Empathic Skills Training in Virtual Reality: A Scoping Review
L. Gerry, M. Billinghurst, E. Broadbent
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00054
Abstract: This scoping review identifies and summarizes previous research exploring the efficacy of Virtual Reality (VR) training for empathy and compassion. We clarify working definitions of empathy and compassion by breaking the constructs into emotional, cognitive, and behavioral components. These components correspond to three key design features of immersive VR technologies: biofeedback, perspective-taking, and simulation. Although previous reviews address techniques for empathy enhancement in VR, there is no comprehensive exploration of the topic of empathy training in VR. This paper presents findings on VR empathy training to date, research gaps, and recommendations for future research.
STARE: Semantic Augmented Reality Decision Support in Smart Environments
Mengya Zheng, Xingyu Pan, Nestor Velasco Bermeo, R. Thomas, D. Coyle, G. O’hare, A. Campbell
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00166
Abstract: The Internet of Things facilitates real-time decision support within smart environments. Augmented Reality allows for the ubiquitous visualization of IoT-derived data, and AR visualization simultaneously permits the cognitive and visual binding of information to the physical object(s) to which it pertains. Essential questions remain about efficiently filtering, prioritizing, determining relevance, and adjudicating individual information needs in real-time decision-making. Therefore, this paper proposes a novel AR decision support framework (STARE) to support immediate decisions within a smart environment by augmenting the user's focal objects with assemblies of semantically relevant IoT data and corresponding suggestions.
Material Reflectance Property Estimation of Complex Objects Using an Attention Network
Bin Cheng, Junli Zhao, Fuqing Duan
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00167
Abstract: Material reflectance property modeling can be used in realistic rendering to generate realistic appearances for virtual objects. However, current works mainly focus on near-plane objects. In this paper, we propose an end-to-end network framework with an attention mechanism to estimate the reflectance properties of any 3D object surface from a single image, where a separate attention module is used for each reflectance property to learn property-specific features. We also generate a material dataset by rendering a set of 3D complex shape models. The dataset is suitable for reflectance property estimation of arbitrarily complex shaped objects. Experiments validate the proposed method.
Omnidirectional Neural Radiance Field for Immersive Experience
Qiaoge Li, Itsuki Ueda, Chun Xie, Hidehiko Shishido, I. Kitahara
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00221
Abstract: This paper proposes a method that uses only RGB information from multiple captured panoramas to provide an immersive viewing experience of real scenes. We generated an omnidirectional neural radiance field by adopting the Fibonacci sphere model for sampling rays, together with several optimized positional encoding approaches. We tested our method on synthetic and real scenes and achieved satisfactory empirical performance. Our result makes an immersive, continuous free-viewpoint experience possible.
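The "Fibonacci sphere model for sampling rays" in the abstract refers to the golden-angle lattice, a standard construction for near-uniformly distributed directions on a sphere. A minimal sketch of that construction, independent of the paper's actual implementation:

```python
import math

def fibonacci_sphere(n):
    """Generate n approximately uniformly distributed unit vectors on the
    sphere using the Fibonacci (golden-angle) lattice: heights are spaced
    evenly in [-1, 1] and azimuths advance by the golden angle."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n      # uniform height in (-1, 1)
        r = math.sqrt(1.0 - z * z)         # circle radius at height z
        theta = golden_angle * i           # azimuth via golden angle
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

dirs = fibonacci_sphere(256)
# Every generated direction is a unit vector.
assert all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in dirs)
```

In an omnidirectional radiance-field setting, such directions could serve as ray directions cast from a panorama's viewpoint.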
Toward Using Multi-Modal Machine Learning for User Behavior Prediction in Simulated Smart Home for Extended Reality
Powen Yao, Yu Hou, Yuan He, Da Cheng, Huanpu Hu, Michael Zyda
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00195
Abstract: In this work, we propose a multi-modal approach to manipulate smart home devices in a smart home environment simulated in virtual reality (VR). We determine the user's target device and the desired action from their utterance, spatial information (gestures, positions, etc.), or a combination of the two. Since the information contained in the user's utterance and the spatial information can be disjoint or complementary, we process the two sources of information in parallel using our array of machine learning models. We use ensemble modeling to aggregate the results of these models and enhance the quality of our final prediction results. We present our preliminary architecture, models, and findings.
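The abstract does not specify the models or the aggregation scheme, but the ensemble step it describes could be sketched as a majority vote over per-model (device, action) predictions. All names and example values below are hypothetical:

```python
from collections import Counter

def ensemble_vote(predictions):
    """Aggregate (device, action) predictions from several models by
    majority vote; ties go to the model listed earlier, treating list
    order as model priority."""
    counts = Counter(predictions)
    best = max(counts.values())
    for p in predictions:              # first prediction with the top count wins
        if counts[p] == best:
            return p

# Hypothetical outputs from an utterance model and two spatial models.
preds = [("lamp", "on"), ("lamp", "on"), ("tv", "off")]
print(ensemble_vote(preds))  # → ('lamp', 'on')
```

A real system would more likely weight each model's vote by its confidence, but the hard-vote version above shows the aggregation idea in its simplest form.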
Augmented Reality and Surgery: Human Factors, Challenges, and Future Steps
Soojeong Yoo, A. Blandford
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00101
Abstract: Augmented reality (AR) has shown much potential when applied in surgical settings, where it can help guide surgeons through complex procedures, train students, and provide heads-up, hands-free spatial information. In this position paper, we discuss some of the current use cases of AR in surgical practice, evaluation measures, challenges, and potential directions for future research. The aim of this paper is to start an important discussion to improve future research and the outcomes of system implementations for surgery.
Simulating Wind Tower Construction Process for Virtual Construction Safety Training and Active Learning
Wanwan Li, B. Esmaeili, L. Yu
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00082
Abstract: The growth of the wind energy industry in the United States has been remarkable; however, despite its significance and installation capacity, wind energy investments such as wind turbines and wind farms involve various safety risks. Increasing construction workers' awareness of the associated hazards requires engaging training programs. Therefore, to address this emergent need, we develop a realistic simulation of the wind tower construction process in an immersive virtual reality environment, aiming to inform workers of the general safety and health hazards associated with the critical processes used in constructing, maintaining, and demolishing wind towers.
A Testbed for Exploring Multi-Level Precueing in Augmented Reality
Jen-Shuo Liu, B. Tversky, Steven K. Feiner
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00121
Abstract: Precueing information about upcoming subtasks prior to performing them has the potential to make an entire task faster and easier to accomplish than cueing only the current subtask. Most AR and VR research on precueing has addressed path-following tasks requiring simple actions at a series of locations, such as pushing a button or just visiting that location. We present a testbed for exploring multi-level precueing in a richer task that requires the user to move their hand between specified locations, transporting an object between some of them and rotating it to a designated orientation.