Animation Fidelity in Self-Avatars: Impact on User Performance and Sense of Agency
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00044
Haoran Yun, J. L. Ponton, C. Andújar, N. Pelechano
The use of self-avatars is gaining popularity thanks to affordable VR headsets. Unfortunately, mainstream VR devices often use a small number of trackers and provide low-accuracy animations. Previous studies have shown that the Sense of Embodiment, and in particular the Sense of Agency, depends on the extent to which the avatar's movements mimic the user's movements. However, few works have studied this effect for tasks requiring precise interaction with the environment, i.e., tasks that require accurate manipulation, precise foot stepping, or correct body poses. In these cases, users are likely to notice inconsistencies between their self-avatar's pose and their actual pose. In this paper, we study the impact of the animation fidelity of the user avatar on a variety of tasks that focus on arm movement, leg movement, and body posture. We compare three animation techniques: two using Inverse Kinematics to reconstruct the pose from sparse input (6 trackers), and a third using a professional motion capture system with 17 inertial sensors. We evaluate these animation techniques both quantitatively (completion time, unintentional collisions, pose accuracy) and qualitatively (Sense of Embodiment). Our results show that animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better at mimicking body poses. Surprisingly, the IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to the MoCap system's higher latency and positional drift, which cause errors at the end-effectors that are most noticeable in contact areas such as the feet.
{"title":"Animation Fidelity in Self-Avatars: Impact on User Performance and Sense of Agency","authors":"Haoran Yun, J. L. Ponton, C. Andújar, N. Pelechano","doi":"10.1109/VR55154.2023.00044","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00044","url":null,"abstract":"The use of self-avatars is gaining popularity thanks to affordable VR headsets. Unfortunately, mainstream VR devices often use a small number of trackers and provide low-accuracy animations. Previous studies have shown that the Sense of Embodiment, and in particular the Sense of Agency, depends on the extent to which the avatar's movements mimic the user's movements. However, few works study such effect for tasks requiring a precise interaction with the environment, i.e., tasks that require accurate manipulation, precise foot stepping, or correct body poses. In these cases, users are likely to notice inconsistencies between their self-avatars and their actual pose. In this paper, we study the impact of the animation fidelity of the user avatar on a variety of tasks that focus on arm movement, leg movement and body posture. We compare three different animation techniques: two of them using Inverse Kinematics to reconstruct the pose from sparse input (6 trackers), and a third one using a professional motion capture system with 17 inertial sensors. We evaluate these animation techniques both quantitatively (completion time, unintentional collisions, pose accuracy) and qualitatively (Sense of Embodiment). Our results show that the animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better in mimicking body poses. Surprisingly, IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to the higher latency and the positional drift that causes errors at the end-effectors, which are more noticeable in contact areas such as the feet.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129157241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirically Evaluating the Effects of Eye Height and Self-Avatars on Dynamic Passability Affordances in Virtual Reality
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00046
Ayush Bhargava, Roshan Venkatakrishnan, R. Venkatakrishnan, Hannah M. Solini, Kathryn M. Lucaites, Andrew C. Robb, C. Pagano, Sabarish V. Babu
Over the past two decades, self-avatars have been shown to affect the perception both of oneself and of environmental properties, including the sizes and distances of elements in immersive virtual environments. However, virtual avatars that accurately match the body proportions of their users remain inaccessible to the general public. As such, most virtual experiences that represent the user employ a generic avatar that does not fit the proportions of the user's body. This can negatively affect judgments involving affordances such as passability and maneuverability, which pertain to the relationship between properties of environmental elements and properties of the user, and which provide information about the actions that can be enacted. This is especially true when the task requires the user to maneuver around moving objects, as in games. It is therefore necessary to understand how differently sized self-avatars affect the perception of affordances in dynamic virtual environments. To this end, we conducted an experiment investigating how a self-avatar that is the same size as, 20% shorter than, or 20% taller than the user's own body affects passability judgments in a dynamic virtual environment. Our results suggest that the presence of a self-avatar leads to more regulated and safer road-crossing behavior, and helps participants synchronize self-motion to external stimuli more quickly than in the absence of a self-avatar.
{"title":"Empirically Evaluating the Effects of Eye Height and Self-Avatars on Dynamic Passability Affordances in Virtual Reality","authors":"Ayush Bhargava, Roshan Venkatakrishnan, R. Venkatakrishnan, Hannah M. Solini, Kathryn M. Lucaites, Andrew C. Robb, C. Pagano, Sabarish V. Babu","doi":"10.1109/VR55154.2023.00046","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00046","url":null,"abstract":"Over the past two decades self-avatars have been shown to affect the perception of both oneself and of environmental properties including the sizes and distances of elements in immersive virtual environments. However, virtual avatars that accurately match the body proportions of their users remain inaccessible to the general public. As such, most virtual experiences that represent the user have a generic avatar that does not fit the proportions of the users' body. This can negatively affect judgments involving affordances, such as passability and maneuverability, which pertain to the relationship between the properties of environmental elements relative to the properties of the user providing information about actions that can be enacted. This is especially true when the task requires the user to maneuver around moving objects like in games. Therefore, it is necessary to understand how different sized self-avatars affect the perception of affordances in dynamic virtual environments. To better understand this, we conducted an experiment investigating how a self-avatar that is either the same size, 20% shorter, or 20% taller, than the user's own body affects passability judgments in a dynamic virtual environment. Our results suggest that the presence of self-avatars results in better regulatory and safer road crossing behavior, and helps participants synchronize self-motion to external stimuli quicker than in the absence of self-avatars.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131374971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volumetric Avatar Reconstruction with Spatio-Temporally Offset RGBD Cameras
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00023
Gareth Rendle, A. Kreskowski, Bernd Froehlich
RGBD cameras can capture users and their actions in the real world for the reconstruction of photo-realistic volumetric avatars that allow rich interaction between spatially distributed telepresence parties in virtual environments. In this paper, we present and evaluate a system design that enables volumetric avatar reconstruction at increased frame rates. We demonstrate that we can overcome the limited capture frame rate of commodity RGBD cameras such as the Azure Kinect by dividing a set of cameras into two spatio-temporally offset reconstruction groups and implementing a real-time reconstruction pipeline that fuses the temporally offset RGBD image streams. Comparisons of our proposed system against capture configurations possible with the same number of RGBD cameras indicate that it is beneficial to use a combination of spatially and temporally offset RGBD cameras, allowing increased reconstruction frame rates and scene coverage while producing temporally consistent volumetric avatars.
{"title":"Volumetric Avatar Reconstruction with Spatio-Temporally Offset RGBD Cameras","authors":"Gareth Rendle, A. Kreskowski, Bernd Froehlich","doi":"10.1109/VR55154.2023.00023","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00023","url":null,"abstract":"RGBD cameras can capture users and their actions in the real world for reconstruction of photo-realistic volumetric avatars that allow rich interaction between spatially distributed telepresence parties in virtual environments. In this paper, we present and evaluate a system design that enables volumetric avatar reconstruction at increased frame rates. We demonstrate that we can overcome the limited capturing frame rate of commodity RGBD cameras such as the Azure Kinect by dividing a set of cameras into two spatio-temporally offset reconstruction groups and implementing a real-time reconstruction pipeline to fuse the temporally offset RGBD image streams. Comparisons of our proposed system against capture configurations possible with the same number of RGBD cameras indicate that it is beneficial to use a combination of spatially and temporally offset RGBD cameras, allowing increased reconstruction frame rates and scene coverage while producing temporally consistent volumetric avatars.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124561094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing Product Placement for Virtual Stores
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00049
Wei Liang, Luhui Wang, Xinzhe Yu, Changyang Li, Rawan Alghofaili, Yining Lang, L. Yu
The recent popularity of consumer-grade virtual reality devices has enabled users to experience immersive shopping in virtual environments. As in a real-world store, the placement of products in a virtual store should appeal to shoppers, yet creating such placements manually is time-consuming, tedious, and non-trivial. This work therefore introduces a novel approach for automatically optimizing product placement in virtual stores. Our approach considers product exposure and spatial constraints, applying an optimizer to search for optimal product placement solutions. We conducted qualitative scene-rationality and quantitative product-exposure experiments to validate our approach with users. The results show that the proposed approach can synthesize reasonable product placements and increase product exposure for different virtual stores.
{"title":"Optimizing Product Placement for Virtual Stores","authors":"Wei Liang, Luhui Wang, Xinzhe Yu, Changyang Li, Rawan Alghofaili, Yining Lang, L. Yu","doi":"10.1109/VR55154.2023.00049","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00049","url":null,"abstract":"The recent popularity of consumer-grade virtual reality devices has enabled users to experience immersive shopping in virtual environments. As in a real-world store, the placement of products in a virtual store should appeal to shoppers, which could be time-consuming, tedious, and non-trivial to create manually. Thus, this work introduces a novel approach for automatically optimizing product placement in virtual stores. Our approach considers product exposure and spatial constraints, applying an optimizer to search for optimal product placement solutions. We conducted qualitative scene rationality and quantitative product exposure experiments to validate our approach with users. The results show that the proposed approach can synthesize reasonable product placements and increase product exposures for different virtual stores.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121120190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Compact Photochromic Occlusion Capable See-through Display with Holographic Lenses
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00039
Chun Wei Ooi, Yuichi Hiroi, Yuta Itoh
Occlusion is a crucial visual element in optical see-through (OST) augmented reality; however, implementing occlusion in OST displays while addressing various design trade-offs is a difficult problem. In contrast to the traditional method of using spatial light modulators (SLMs) for the occlusion mask, using photochromic materials as occlusion masks can effectively eliminate diffraction artifacts in see-through views due to the lack of electronic pixels, thus providing superior see-through image quality. However, this design requires UV illumination to activate the photochromic material, which traditionally requires multiple SLMs, resulting in a larger form factor for the system. This paper presents a compact photochromic occlusion-capable OST design using multilayer, wavelength-dependent holographic optical lenses (HOLs). Our approach employs a single digital micromirror device (DMD) to form both the occlusion mask with UV light and a virtual image with visible light in a time-multiplexed manner. We demonstrate our proof-of-concept system on a bench-top setup and assess the appearance and contrast of the displayed images. We also suggest potential improvements to the current prototype to encourage the community to explore this occlusion approach.
{"title":"A Compact Photochromic Occlusion Capable See-through Display with Holographic Lenses","authors":"Chun Wei Ooi, Yuichi Hiroi, Yuta Itoh","doi":"10.1109/VR55154.2023.00039","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00039","url":null,"abstract":"Occlusion is a crucial visual element in optical see-through (OST) augmented reality, however, implementing occlusion in OST displays while addressing various design trade-offs is a difficult problem. In contrast to the traditional method of using spatial light modulators (SLMs) for the occlusion mask, using photochromic materials as occlusion masks can effectively eliminate diffraction artifacts in see-through views due to the lack of electronic pixels, thus providing superior see-through image quality. However, this design requires UV illumination to activate the photochromic mate-rial, which traditionally requires multiple SLMs, resulting in a larger form factor for the system. This paper presents a compact photochromic occlusion-capable OST design using multilayer, wavelength-dependent holographic optical lenses (HOLs). Our approach employs a single digital mi-cromirror display (DMD) to form both the occlusion mask with UV light and a virtual image with visible light in a time-multiplexed man-ner. We demonstrate our proof-of-concept system on a bench-top setup and assess the appearance and contrasts of the displayed image. We also suggest potential improvements for current prototypes to encourage the community to explore this occlusion approach.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115425024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Redirected Walking Based on Historical User Walking Data
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00021
Cheng-Wei Fan, Sen-Zhe Xu, Peng Yu, Fang-Lue Zhang, Songhai Zhang
With redirected walking (RDW) technology, people can explore large virtual worlds within smaller physical spaces. RDW steers the user's walking trajectory in the physical space through subtle adjustments so as to minimize collisions with the physical environment. Previous predictive algorithms place constraints on the user's path according to the spatial layout of the virtual environment and work well when applicable, while reactive algorithms are more general for scenarios involving free exploration or unconstrained movement. However, even in relatively free environments, the user's walking can be predicted to a certain extent by analyzing the user's historical walking data, which can inform the decision-making of reactive algorithms. This paper proposes a novel RDW method that improves real-time, unconstrained RDW by analyzing and utilizing the user's historical walking data. In this method, the physical space is discretized by considering the user's location and orientation. Using a weighted directed graph built from the user's historical walking data, we dynamically update the scores of the reachable poses in the physical space as the user walks. We rank the scores and choose the optimal target position and orientation to guide the user to the best pose. Since simulation experiments have been shown to be effective in many previous RDW studies, we also provide a method to simulate user walking trajectories and generate a dataset. Experiments show that our method outperforms multiple state-of-the-art methods in various environments of different sizes and spatial layouts.
{"title":"Redirected Walking Based on Historical User Walking Data","authors":"Cheng-Wei Fan, Sen-Zhe Xu, Peng Yu, Fang-Lue Zhang, Songhai Zhang","doi":"10.1109/VR55154.2023.00021","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00021","url":null,"abstract":"With redirected walking (RDW) technology, people can explore large virtual worlds in smaller physical spaces. RDW controls the trajectory of the user's walking in the physical space through subtle adjustments, so as to minimize the collision between the user and the physical space. Previous predictive algorithms place constraints on the user's path according to the spatial layouts of the virtual environment and work well when applicable, while reactive algorithms are more general for scenarios involving free exploration or uncon-strained movements. However, even in relatively free environments, we can predict the user's walking to a certain extent by analyzing the user's historical walking data, which can help the decision-making of reactive algorithms. This paper proposes a novel RDW method that improves the effect of real-time unrestricted RDW by analyzing and utilizing the user's historical walking data. In this method, the physical space is discretized by considering the user's location and orientation in the physical space. Using the weighted directed graph obtained from the user's historical walking data, we dynamically update the scores of different reachable poses in the physical space during the user's walking. We rank the scores and choose the optimal target position and orientation to guide the user to the best pose. Since simulation experiments have been shown to be effective in many previous RDW studies, we also provide a method to simulate user walking trajectories and generate a dataset. Experiments show that our method outperforms multiple state-of-the-art methods in various environments of different sizes and spatial layouts.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129706989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring the Effect of Stereo Deficiencies on Peripersonal Space Pointing
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00063
Anil Ufuk Batmaz, M. H. Mughrabi, M. Sarac, Mayra Donaji Barrera Machuca, W. Stuerzlinger
State-of-the-art Virtual Reality (VR) and Augmented Reality (AR) headsets rely on single-focal stereo displays. For objects away from the focal plane, such displays create a vergence-accommodation conflict (VAC), potentially degrading user interaction performance. In this paper, we study how the VAC affects pointing at targets within arm's reach with virtual hand and raycasting interaction in current stereo display systems. We use a previously proposed experimental methodology that extends the ISO 9241-411:2015 multi-directional selection task to enable fair comparisons between selecting targets in different display conditions. We conducted a user study with eighteen participants, and the results indicate that participants were faster and had higher throughput in the constant-VAC condition with the virtual hand. We hope that our results enable designers to choose more efficient interaction methods in virtual environments.
{"title":"Measuring the Effect of Stereo Deficiencies on Peripersonal Space Pointing","authors":"Anil Ufuk Batmaz, M. H. Mughrabi, M. Sarac, Mayra Donaji Barrera Machuca, W. Stuerzlinger","doi":"10.1109/VR55154.2023.00063","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00063","url":null,"abstract":"State-of-the-art Virtual Reality (VR) and Augmented Reality (AR) headsets rely on singlefocal stereo displays. For objects away from the focal plane, such displays create a vergence-accommodation conflict (VAC), potentially degrading user interaction performance. In this paper, we study how the VAC affects pointing at targets within arm's reach with virtual hand and raycasting interaction in current stereo display systems. We use a previously proposed experimental methodology that extends the ISO 9241–411:2015 multi-directional selection task to enable fair comparisons between selecting targets in different display conditions. We conducted a user study with eighteen participants and the results indicate that participants were faster and had higher throughput in the constant VAC condition with the virtual hand. We hope that our results enable designers to choose more efficient interaction methods in virtual environments.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129721468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Neural Biomarkers in Young Adults Resistant to VR Motion Sickness: A Pilot Study of EEG
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00048
Gang Li, Katharina Margareta Theresa Pöhlmann, Mark Mcgill, C. Chen, S. Brewster, F. Pollick
VR (Virtual Reality) Motion Sickness (VRMS) refers to purely visually induced motion sickness. Not everyone is susceptible to VRMS, but when it is experienced, nausea will often lead users to withdraw from the ongoing VR application. VRMS represents a serious challenge in the field of VR ergonomics and human factors. In line with previous neuro-ergonomics research, this paper treats VRMS as a brain-state problem, since the various etiologies of VRMS support the claim that it is caused by disagreement between the vestibular and visual sensory inputs. What sets this work apart from the existing literature, however, is that it explores anti-VRMS brain patterns via electroencephalogram (EEG) in VRMS-resistant individuals. Based on existing datasets from a previous study, we found enhanced theta activity in the left parietal cortex in VRMS-resistant individuals (N = 10) compared to VRMS-susceptible individuals (N = 10). Even though the sample size is not large, this finding achieved a medium effect size. It offers new hypotheses regarding how to reduce VRMS by enhancing brain function directly (e.g., via non-invasive transcranial electrostimulation techniques), without the need to redesign existing VR content.
{"title":"Exploring Neural Biomarkers in Young Adults Resistant to VR Motion Sickness: A Pilot Study of EEG","authors":"Gang Li, Katharina Margareta Theresa Pöhlmann, Mark Mcgill, C. Chen, S. Brewster, F. Pollick","doi":"10.1109/VR55154.2023.00048","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00048","url":null,"abstract":"VR (Virtual Reality) Motion Sickness (VRMS) refers to purely visually-induced motion sickness. Not everyone is susceptible to VRMS, but if experienced, nausea will often lead users to withdraw from the ongoing VR applications. VRMS represents a serious challenge in the field of VR ergonomics and human factors. Like other neuro-ergonomics researchers did before, this paper considers VRMS as a brain state problem as various etiologies of VRMS support the claim that VRMS is caused by disagreement between the vestibular and visual sensory inputs. However, what sets this work apart from the existing literature is that it explores anti-VRMS brain patterns via electroencephalogram (EEG) in VRMS-resistant individuals. Based on existing datasets of a previous study, we found enhanced theta activity in the left parietal cortex in VRMS-resistant individuals (N= 10) compared to VRMS-susceptible individuals (N=10). Even though the sample size per se is not large, this finding achieved medium effect size. This finding offers new hypotheses regarding how to reduce VRMS by the enhancement of brain functions per se (e.g., via non-invasive transcranial electrostimulation techniques) without the need to redesign the existing VR content.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130192605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}