MARS: A Cross-Platform Mobile AR System for Remote Collaborative Instruction and Installation Support using Digital Twins
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00083
S. Tadeja, Diana Janik, Przemysław Stachura, Maciej Tomecki, Karol Książczak, K. Walas
This paper describes a multi-user, mobile, cross-platform AR system that allows real-time remote collaboration utilizing the digital twinning concept. Thanks to cloud services, the users can collaboratively manipulate and exchange information using digital twins realized as detailed multi-part 3D models. We also discuss the design requirements and task analysis captured using an engineering design methodology, and the usability verification of our system using a heuristic approach.
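The abstract gives no implementation details; as a rough illustration of the cloud-synchronized, multi-part digital-twin pattern it describes, a minimal sketch (all names hypothetical, last-writer-wins conflict handling assumed) might look like:

```python
# Minimal sketch of cloud-synchronized digital-twin state for collaborative
# manipulation of a multi-part 3D model. All names are hypothetical; the
# paper's actual architecture may differ.
from dataclasses import dataclass, field

@dataclass
class PartTransform:
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)  # quaternion (x, y, z, w)

@dataclass
class DigitalTwin:
    parts: dict = field(default_factory=dict)  # part id -> PartTransform
    version: int = 0

    def apply_update(self, part_id: str, transform: PartTransform) -> dict:
        """Apply a local manipulation and emit the message a cloud relay
        would broadcast to the other clients."""
        self.parts[part_id] = transform
        self.version += 1
        return {"part": part_id, "transform": transform, "version": self.version}

    def receive_update(self, msg: dict) -> None:
        """Apply a remote update, keeping only the newest version."""
        if msg["version"] > self.version:
            self.parts[msg["part"]] = msg["transform"]
            self.version = msg["version"]

# Two collaborating clients sharing one twin:
a, b = DigitalTwin(), DigitalTwin()
b.receive_update(a.apply_update("pump_housing", PartTransform((0.1, 0.0, 0.2))))
```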
{"title":"MARS: A Cross-Platform Mobile AR System for Remote Collaborative Instruction and Installation Support using Digital Twins","authors":"S. Tadeja, Diana Janik, Przemysław Stachura, Maciej Tomecki, Karol Książczak, K. Walas","doi":"10.1109/VRW55335.2022.00083","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00083","url":null,"abstract":"This paper describes a multi-user, mobile, cross-platform AR system that allows real-time remote collaboration utilizing the digital twinning concept. Thanks to cloud services, the users can collab-oratively manipulate and exchange information using digital twin realized as detailed multi-part 3D models. We also discuss design requirements and task analysis captured using engineering design methodology and the usability verification of our system using a heuristical approach.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123378023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpersonal Distance to a Speaking Avatar: Loudness Matters Irrespective of Contents
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00238
Kota Takahashi, Y. Inoue, M. Kitazaki
Maintaining appropriate interpersonal distance, depending on the situation, is important for effective and safe communication. We aimed to investigate the effects of speech loudness and clarity on the interpersonal distance kept towards an avatar in a virtual environment. We found that louder speech from the avatar made the distance between participants and the avatar larger than quiet speech did, but the clarity of the speech did not significantly affect the distance. These results suggest that the perception of loudness modulates the interpersonal distance towards a virtual avatar so as to maintain the intimacy equilibrium.
{"title":"Interpersonal Distance to a Speaking Avatar: Loudness Matters Irrespective of Contents","authors":"Kota Takahashi, Y. Inoue, M. Kitazaki","doi":"10.1109/VRW55335.2022.00238","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00238","url":null,"abstract":"It is important for us to maintain appropriate interpersonal distance depending on situations in effective and safe communications. We aimed to investigate the effects of speech loudness and clarity on the interpersonal distance towards an avatar in a virtual environment. We found that the louder speech of the avatar made the distance between the participants and the avatar larger than the quiet speech, but the clarity of the speech did not significantly affect the distance. These results suggest that the perception of loudness modulates the interpersonal distance towards the virtual avatar to maintain the intimate equilibrium.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124962934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Head-Worn Markerless Augmented Reality Inside A Moving Vehicle
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00191
Zhiwei Zhu, Mikhail Sizintsev, Glen Murray, Han-Pang Chiu, Ali Z. Chaudhry, S. Samarasekera, Rakesh Kumar
This paper describes a system that provides general head-worn outdoor augmented reality (AR) capability for a user inside a moving vehicle. Our system combines pose estimation from both the vehicle's navigation system and wearable sensors to address the failure of commercial AR devices inside moving vehicles. We continuously match natural visual features from the camera against a prebuilt database of interior vehicle scenes. To improve robustness in a moving vehicle carrying other passengers, a human detection module is used to filter people out of the camera scene. Experiments demonstrate the effectiveness of the proposed solution.
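The abstract outlines the pipeline but not the code; a minimal sketch of the filtering step it describes — discarding keypoints that fall inside detected-person regions before matching against the interior-scene database — might look as follows, assuming axis-aligned person bounding boxes from any off-the-shelf detector (all names illustrative):

```python
# Sketch of the feature-filtering step: drop keypoints inside person boxes
# so only static interior features are matched for pose estimation.

def filter_dynamic_features(keypoints, person_boxes):
    """keypoints: list of (x, y); person_boxes: list of (x0, y0, x1, y1)."""
    def inside_any(pt):
        x, y = pt
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for (x0, y0, x1, y1) in person_boxes)
    return [pt for pt in keypoints if not inside_any(pt)]

static_kps = filter_dynamic_features(
    keypoints=[(120, 80), (400, 300), (610, 95)],
    person_boxes=[(350, 250, 480, 460)],  # a passenger detected in frame
)
# static_kps -> [(120, 80), (610, 95)]; only these are matched against
# the prebuilt interior-scene feature database.
```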
{"title":"Head-Worn Markerless Augmented Reality Inside A Moving Vehicle","authors":"Zhiwei Zhu, Mikhail Sizintsev, Glen Murray, Han-Pang Chiu, Ali Z. Chaudhry, S. Samarasekera, Rakesh Kumar","doi":"10.1109/VRW55335.2022.00191","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00191","url":null,"abstract":"This paper describes a system that provides general head-worn outdoor augmented reality (AR) capability for the user inside a moving vehicle. Our system follows the concept of combining pose estimation from both vehicle navigation system and wearable sensors to address the failure of commercial AR devices inside a moving vehicle. We continuously match natural visual features from the camera against a prebuilt database of interior vehicle scenes. To improve the robustness in a moving vehicle with other passengers, a human detection module is adapted to filter out people from the camera scene. Experiments demonstrate the effectiveness of the proposed solution.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122983047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anatomy Studio II: A Cross-Reality Application for Teaching Anatomy
Pub Date: 2022-03-01 | DOI: 10.48550/arXiv.2203.02186
Joaquim A. Jorge, Pedro Belchior, A. Gomes, Maurício Sousa, J. Pereira, J. Uhl
Virtual Reality has become an important educational tool due to the pandemic and the increasing globalization of education. In this paper, we present a framework for teaching virtual anatomy at the university level. Because of isolation and quarantine requirements and increased international collaboration, virtual classes have become a staple of today's curricula. Our work builds on the Visible Human Project's material for virtual dissection and provides a medium for groups of students to perform collaborative anatomical dissections in real time, using sketching, 3D visualizations, and audio coupled with interactive 2D tablets for precise drawing. We describe the system architecture, compare its requirements with those of a previous development [1], and discuss preliminary results. Discussions with anatomists show that this is an effective tool. We introduce avenues for further research and discuss collaboration challenges posed by this context.
{"title":"Anatomy Studio II A Cross-Reality Application for Teaching Anatomy","authors":"Joaquim A. Jorge, Pedro Belchior, A. Gomes, Maurício Sousa, J. Pereira, J. Uhl","doi":"10.48550/arXiv.2203.02186","DOIUrl":"https://doi.org/10.48550/arXiv.2203.02186","url":null,"abstract":"Virtual Reality has become an important educational tool, due to the pandemic and increasing globalization of education. In this paper, we present a framework for teaching Virtual Anatomy at the uni-versity level. Because of the isolation and quarantine requirements and the increased international collaboration, virtual classes have become a staple of today's curricula. Our work builds on the Vis-ible Human Projects for Virtual Dissection material and provides a medium for groups of students to do collaborative anatomical dissections in real-time using sketching and 3D visualizations and audio coupled with interactive 2D tablets for precise drawing. We describe the system architecture, compare requirements with those of a previous development [1] and discuss the preliminary results. Discussions with Anatomists show that this is an effective tool. We introduce avenues for further research and discuss collaboration challenges posed by this context.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121683397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Conducting Effective Locomotion Through Hardware Transformation in Head-Mounted-Device - A Review Study
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00181
Y. P. Gururaj, Raghav Mittal, Sai Anirudh Karre, Y. R. Reddy, Syed Azeemuddin
Locomotion in Virtual Reality (VR) relies on motion tracking to simulate user movements according to the Degrees-of-Freedom (DOF) of the application. For effective locomotion, VR practitioners may have to transform their hardware from 3-DOF to 6-DOF. In this context, we conducted a literature review of the different motion tracking methods employed in Head-Mounted Devices (HMDs) to understand such hardware transformations for locomotion in VR. Our observations led us to formulate a taxonomy of tracking methods for locomotion in VR based on system design. Our study also captures the different metrics that VR practitioners use to evaluate hardware with respect to context, performance, and significance for locomotion.
{"title":"Towards Conducting Effective Locomotion Through Hardware Transformation in Head-Mounted-Device - A Review Study","authors":"Y. P. Gururaj, Raghav Mittal, Sai Anirudh Karre, Y. R. Reddy, Syed Azeemuddin","doi":"10.1109/VRW55335.2022.00181","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00181","url":null,"abstract":"Locomotion in Virtual Reality (VR) acts as a motion tracking unit for simulating user movements based on the Degree-of-Freedom (DOF) of the application. For effective locomotion, VR practitioners may have to transform their hardware from 3-DOF to 6-DOF. In this context, we conducted a literature review on different motion tracking methods employed in the Head-Mounted-Devices (HMD) to understand such hardware transformation to conduct locomotion in VR. Our observations led us to formulate a taxonomy of the tracking methods for locomotion in VR based on system design. Our study also captures different metrics that VR practitioners use to evaluate the hardware based on the context, performance, and significance for conducting locomotion.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122546880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relationship Between the Sensory Processing Patterns and the Detection Threshold of Curvature Gain
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00205
Keigo Matsumoto, Takuji Narumi
This study examines the relationship between sensory processing patterns and the effects of redirected walking (RDW). Research efforts have been devoted to identifying the detection thresholds (DTs) of RDW techniques, and various DTs have been reported across studies. Recently, age, sex, and spatial ability have been found to be associated with the DTs of RDW techniques. We conducted a preliminary examination of the relationship between sensory processing patterns, as measured by the Adolescent/Adult Sensory Profile, and the DT of curvature gain, one of the fundamental RDW techniques. The results suggest that higher sensory sensitivity tendencies were associated with a lower DT, i.e., such participants were more likely to notice the RDW technique.
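For readers unfamiliar with the technique: curvature gain is commonly defined in the RDW literature (e.g., Steinicke et al.; this paper's own notation may differ) in terms of the radius of the physical arc onto which the user's straight virtual path is bent:

```latex
% Standard curvature-gain definition from the RDW literature;
% the paper's notation may differ.
g_C = \frac{1}{r}
```

where $r$ is the radius of the circular arc the user physically walks. The detection threshold is the largest $g_C$ (i.e., smallest radius) that users fail to notice, so a lower DT means even gentle bending is detected.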
{"title":"Relationship Between the Sensory Processing Patterns and the Detection Threshold of Curvature Gain","authors":"Keigo Matsumoto, Takuji Narumi","doi":"10.1109/VRW55335.2022.00205","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00205","url":null,"abstract":"This study examines the relationship between sensory processing patterns and the effects of redirected walking (RDW). Research efforts have been devoted to identifying the detection threshold (DT) of the RDW techniques, and various DTs have been reported in different studies. Recently, age, sex, and spatial ability have been found to be associated with the DTs of RDW techniques. A preliminary examination was conducted on the relationship between sensory processing patterns, as measured by the Adolescents/Adult Sensory Profile, and the DT of curvature gains, one of the fundamental RDW techniques, and it was suggested that the higher sensory sensitivity tendencies were associated with lower DT, i.e., participants were more likely to notice the RDW technique.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123914853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distant Hand Interaction Framework in Augmented Reality
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00332
Jesus Ugarte, Nahal Norouzi, A. Erickson, G. Bruder, G. Welch
Recent augmented reality (AR) head-mounted displays support shared experiences among multiple users in real physical spaces. While previous research has examined embodied methods to enhance interpersonal communication cues, less work has looked at distant interaction in AR and, in particular, distant hand communication, which can open up new possibilities for scenarios such as large-group collaboration. In this demonstration, we present a research framework for distant hand interaction in AR, including mapping techniques and visualizations. Our techniques are inspired by virtual reality (VR) distant hand interactions, but had to be adjusted for the different context of AR and the limited knowledge about the physical environment. We discuss different techniques for hand communication, including deictic pointing at a distance, distant drawing in AR, and distant communication through symbolic hand gestures.
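The abstract names mapping techniques without giving formulas; one classic VR distant-interaction mapping of the kind the framework says it draws on is the Go-Go nonlinear arm extension (Poupyrev et al.), sketched below as an illustration — the authors' actual mappings are not specified, and the constants here are hypothetical:

```python
# Illustrative Go-Go-style nonlinear hand mapping: 1:1 near the body,
# quadratic extension beyond a threshold so nearby gestures reach far targets.

def gogo_extend(hand_dist: float, threshold: float = 0.4, k: float = 6.0) -> float:
    """Map real hand distance from the torso (meters) to virtual distance."""
    if hand_dist <= threshold:
        return hand_dist
    return hand_dist + k * (hand_dist - threshold) ** 2

for d in (0.3, 0.5, 0.7):
    print(f"real {d:.1f} m -> virtual {gogo_extend(d):.2f} m")
# real 0.3 m -> virtual 0.30 m
# real 0.5 m -> virtual 0.56 m
# real 0.7 m -> virtual 1.24 m
```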
{"title":"Distant Hand Interaction Framework in Augmented Reality","authors":"Jesus Ugarte, Nahal Norouzi, A. Erickson, G. Bruder, G. Welch","doi":"10.1109/VRW55335.2022.00332","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00332","url":null,"abstract":"Recent augmented reality (AR) head-mounted displays support shared experiences among multiple users in real physical spaces. While previous research looked at different embodied methods to enhance interpersonal communication cues, so far, less research looked at distant interaction in AR and, in particular, distant hand communication, which can open up new possibilities for scenarios, such as large-group collaboration. In this demonstration, we present a research framework for distant hand interaction in AR, including mapping techniques and visualizations. Our techniques are inspired by virtual reality (VR) distant hand interactions, but had to be adjusted due to the different context in AR and limited knowledge about the physical environment. We discuss different techniques for hand communication, including deictic pointing at a distance, distant drawing in AR, and distant communication through symbolic hand gestures.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123940625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Subjective and Objective Analyses of Collaboration and Co-Presence in a Virtual Reality Remote Environment
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00108
Allison Bayro, Yalda Ghasemi, Heejin Jeong
Remote collaboration in virtual reality has gained attention and proven to be a viable way of providing effective collaboration environments for physically distant collaborators. This study compares head-mounted display (HMD)- and computer-based remote collaboration solutions that allow users to interact with each other through immersive environments. Analyzing remote collaboration in immersive environments requires understanding both group interactions and personal experiences. For this purpose, 10 participants performed a 3D object assembly task, and self-reported surveys and physiological measures were used to investigate the effectiveness of the collaboration from the users' perspective. The results showed that HMD-based remote collaboration in a virtual reality environment increased the sense of co-presence among users.
{"title":"Subjective and Objective Analyses of Collaboration and Co-Presence in a Virtual Reality Remote Environment","authors":"Allison Bayro, Yalda Ghasemi, Heejin Jeong","doi":"10.1109/VRW55335.2022.00108","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00108","url":null,"abstract":"Remote collaboration in virtual reality has gained attention and proved to be a viable solution for providing effective collaboration environments for physically distant collaborators. This study compares head-mounted display (HMD)- and computer-based remote collaboration solutions that allow users to interact with each other through immersive environments. Analyzing remote collaboration in immersive environments requires understanding group interactions and personal experiences. For this purpose, a 3D object assembly task was performed by 10 participants using self-reported surveys and physiological measures to investigate the effectiveness of collaboration from the users' perspective. The results showed that the HMD-based remote collaboration in a virtual reality environment increased the sense of co-presence among the users.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123969711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AIR-range: Arranging optical systems to present mid-AIR images with continuous luminance on and above a tabletop
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00212
Tomoyo Kikuchi, Yuchi Yahagi, S. Fukushima, Saki Sakaguchi, T. Naemura
We propose “AIR-range,” a system that seamlessly connects mid-air images from the surface of a table into mid-air space. The system can display tall mid-air images in the three-dimensional (3D) space beyond the screen. AIR-range is implemented with a symmetrical mirror structure that displays a large image by integrating multiple imaging paths. The mirror arrangement in previous research suffered from discontinuous luminance. In this study, we formulate the relationship between the parameters of the optical elements and the appearance of the mid-air images, and optimize the optical system to minimize the difference in luminance between image paths.
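The abstract states the optimization goal without a formula; one natural least-squares formulation — our illustration only, not necessarily the authors' objective — is to choose the optical parameters so as to minimize the luminance mismatch between adjacent imaging paths:

```latex
% Illustrative formulation only; the paper's actual objective may differ.
% L_i(\theta): luminance delivered by imaging path i under optical
% parameters \theta (element sizes, angles, spacing).
\theta^{\ast} = \arg\min_{\theta} \sum_{i} \bigl( L_i(\theta) - L_{i+1}(\theta) \bigr)^{2}
```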
{"title":"AIR-range: Arranging optical systems to present mid-AIR images with continuous luminance on and above a tabletop","authors":"Tomoyo Kikuchi, Yuchi Yahagi, S. Fukushima, Saki Sakaguchi, T. Naemura","doi":"10.1109/VRW55335.2022.00212","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00212","url":null,"abstract":"We propose “AIR-range”- a system that seamlessly connects mid-air images from the surface of a table to mid-air space. This system can display tall mid-air images in the three-dimensional (3D) space beyond the screen. AIR-range is implemented using a symmetrical mirror structure that displays a large image by integrating multiple imaging paths. The mirror arrangement in previous research had a problem in that the luminance was discontinuous. In this study, we theorize the relationship between the parameters of optical elements and the appearance of mid-air images and optimize an optical system to minimize the difference in luminance between image paths.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124012087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CWCT: An Effective Vision Transformer using improved Cross-Window Self-Attention and CNN
Pub Date: 2022-03-01 | DOI: 10.1109/VRW55335.2022.00041
Mengxing Li, Ying Song, Bo Wang
In the process of metaverse construction, achieving better interaction requires providing clear semantic information for each object, and image classification technology plays a very important role in this process. Building on the CMT transformer and an improved Cross-Shaped Window Self-Attention, this paper presents an image classification framework combining CNNs and transformers, called the CWCT transformer. At high image resolutions, vision transformers incur excessive model complexity and computation. To address this, CWCT captures local features using an optimized Cross-Window Self-Attention mechanism and global features using a stack of convolutional neural networks (CNNs). The structure can flexibly model at various scales and has linear computational complexity with respect to image size. Compared with the original CMT network, classification accuracy improves on ImageNet-1k and on a randomly sampled Tiny-ImageNet dataset. Thanks to the optimized Cross-Window Self-Attention, CWCT also significantly improves inference speed and model complexity compared with CMT.
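As a generic illustration of the hybrid pattern the abstract describes — convolution combined with self-attention restricted to fixed-size windows, which is what keeps cost linear in image size — a minimal sketch follows. This is not the paper's CWCT architecture; the layer sizes and layout are made up:

```python
# Generic CNN + windowed self-attention hybrid (illustrative, not CWCT).
import torch
import torch.nn as nn

class WindowAttentionBlock(nn.Module):
    def __init__(self, dim: int, window: int, heads: int):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):  # x: (B, C, H, W); H, W divisible by window
        B, C, H, W = x.shape
        w = self.window
        # Partition into non-overlapping w x w windows -> (B*nWin, w*w, C);
        # attention cost per window is constant, so total cost is linear in H*W.
        x = x.view(B, C, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        y = self.norm(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]  # pre-norm residual
        # Reverse the partition back to (B, C, H, W).
        x = x.view(B, H // w, W // w, w, w, C)
        return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

model = nn.Sequential(                          # conv stage, then windowed attention
    nn.Conv2d(3, 64, kernel_size=4, stride=4),  # local features, 4x downsample
    WindowAttentionBlock(dim=64, window=8, heads=4),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1000),                        # e.g. ImageNet-1k classes
)
logits = model(torch.randn(2, 3, 224, 224))     # -> shape (2, 1000)
```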
{"title":"CWCT: An Effective Vision Transformer using improved Cross-Window Self-Attention and CNN","authors":"Mengxing Li, Ying Song, Bo Wang","doi":"10.1109/VRW55335.2022.00041","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00041","url":null,"abstract":"In the process of metaverse construction, in order to achieve better interaction, it is necessary to provide clear semantic information for each object. Image classification technology plays a very important role in this process. Based on CMT transformer and improved Cross-Shaped Window Self-Attention, this paper presents an improved Image classification framework combining CNN and transformers, which is called CWCT transformer. Due to the high resolution of the image, vision transformers will lead to too high model complexity and too much calculation. To solve this problem, CWCT captures local features by using optimized Cross-Window Self-Attention mechanism and global features by using convolutional neural networks (CNN) stack. This structure has the flexibility to model at various scales and has linear computational complexity concerning image size. Compared with the original CMT network, the classification accuracy has been improved on ImageNet-1k and randomly screened Tiny-ImageNet dataset. Thanks to the optimized Cross-Window Self-Attention, the CWCT proposed in this paper has a significant improvement in operation speed and model complexity compared with CMT.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125391571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}