
Latest Publications: 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)

Empirically Evaluating the Effects of Eye Height and Self-Avatars on Dynamic Passability Affordances in Virtual Reality
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00046
Ayush Bhargava, Roshan Venkatakrishnan, R. Venkatakrishnan, Hannah M. Solini, Kathryn M. Lucaites, Andrew C. Robb, C. Pagano, Sabarish V. Babu
Over the past two decades, self-avatars have been shown to affect the perception both of oneself and of environmental properties, including the sizes and distances of elements in immersive virtual environments. However, virtual avatars that accurately match the body proportions of their users remain inaccessible to the general public. As such, most virtual experiences that represent the user employ a generic avatar that does not fit the proportions of the user's body. This can negatively affect judgments involving affordances, such as passability and maneuverability, which concern the relationship between the properties of environmental elements and the properties of the user, providing information about the actions that can be enacted. This is especially true when the task requires the user to maneuver around moving objects, as in games. Therefore, it is necessary to understand how differently sized self-avatars affect the perception of affordances in dynamic virtual environments. To better understand this, we conducted an experiment investigating how a self-avatar that is the same size as, 20% shorter than, or 20% taller than the user's own body affects passability judgments in a dynamic virtual environment. Our results suggest that the presence of self-avatars results in better-regulated and safer road-crossing behavior, and helps participants synchronize self-motion to external stimuli more quickly than in the absence of self-avatars.
Citations: 0
Half Title Page
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00001
Citations: 0
Design and Development of a Mixed Reality Acupuncture Training System
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00042
Qilei Sun, Jiayou Huang, Haodong Zhang, Paul Craig, Lingyun Yu, Eng Gee Lim
This paper looks at how mixed reality can be used to improve and enhance Chinese acupuncture practice through the introduction of an acupuncture training simulator. A prototype system developed for our study allows practitioners to insert virtual needles with their bare hands into a full-scale 3D representation of the human body with labelled acupuncture points. This provides them with a safe and natural environment in which to develop their acupuncture skills, simulating the actual physical process of acupuncture. It also helps them develop their muscle memory for acupuncture and strengthens their memory of acupuncture points through a more immersive learning experience. We describe some of the design decisions and technical challenges overcome in the development of our system. We also present the results of a comparative evaluation with potential users aimed at assessing the viability of such a mixed reality system being used as part of their training and development. The results of our evaluation reveal that the training system excelled at enhancing spatial understanding and improved learning and dexterity in acupuncture practice. These results go some way toward demonstrating the potential of mixed reality for improving practice in therapeutic medicine.
Citations: 0
Volumetric Avatar Reconstruction with Spatio-Temporally Offset RGBD Cameras
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00023
Gareth Rendle, A. Kreskowski, Bernd Froehlich
RGBD cameras can capture users and their actions in the real world for reconstruction of photo-realistic volumetric avatars that allow rich interaction between spatially distributed telepresence parties in virtual environments. In this paper, we present and evaluate a system design that enables volumetric avatar reconstruction at increased frame rates. We demonstrate that we can overcome the limited capturing frame rate of commodity RGBD cameras such as the Azure Kinect by dividing a set of cameras into two spatio-temporally offset reconstruction groups and implementing a real-time reconstruction pipeline to fuse the temporally offset RGBD image streams. Comparisons of our proposed system against capture configurations possible with the same number of RGBD cameras indicate that it is beneficial to use a combination of spatially and temporally offset RGBD cameras, allowing increased reconstruction frame rates and scene coverage while producing temporally consistent volumetric avatars.
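The core idea of the paper's capture design can be illustrated with a small timing simulation. This is not the authors' code; it is a minimal sketch, assuming two camera groups with identical frame rates triggered half a period apart, showing why the fused stream delivers frames twice as often as either group alone.

```python
# Illustrative sketch only: two groups of 30 Hz RGBD cameras, triggered
# half a capture period apart, fuse into an effective 60 Hz frame stream.
def capture_times(rate_hz, phase_s, duration_s):
    """Timestamps (seconds) at which one camera group delivers frames."""
    period = 1.0 / rate_hz
    times, i = [], 0
    while phase_s + i * period < duration_s - 1e-9:
        times.append(phase_s + i * period)
        i += 1
    return times

def fused_stream(rate_hz=30.0, duration_s=0.2):
    period = 1.0 / rate_hz
    group_a = capture_times(rate_hz, 0.0, duration_s)
    group_b = capture_times(rate_hz, period / 2.0, duration_s)  # offset group
    return sorted(group_a + group_b)

stream = fused_stream()
# consecutive fused frames are spaced at half the per-group period
gaps = [b - a for a, b in zip(stream, stream[1:])]
```

The real pipeline must additionally fuse the two temporally offset RGBD image streams into a consistent volumetric reconstruction; the sketch only captures the scheduling argument for the increased frame rate.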
Citations: 0
Measuring the Effect of Stereo Deficiencies on Peripersonal Space Pointing
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00063
Anil Ufuk Batmaz, M. H. Mughrabi, M. Sarac, Mayra Donaji Barrera Machuca, W. Stuerzlinger
State-of-the-art Virtual Reality (VR) and Augmented Reality (AR) headsets rely on single-focal stereo displays. For objects away from the focal plane, such displays create a vergence-accommodation conflict (VAC), potentially degrading user interaction performance. In this paper, we study how the VAC affects pointing at targets within arm's reach with virtual hand and raycasting interaction in current stereo display systems. We use a previously proposed experimental methodology that extends the ISO 9241-411:2015 multi-directional selection task to enable fair comparisons between selecting targets in different display conditions. We conducted a user study with eighteen participants, and the results indicate that participants were faster and had higher throughput in the constant-VAC condition with the virtual hand. We hope that our results enable designers to choose more efficient interaction methods in virtual environments.
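The throughput measure the abstract refers to follows the Fitts'-law formulation used in ISO 9241-411-style evaluations: the effective index of difficulty in bits divided by movement time. A minimal sketch of that computation (function and parameter names are illustrative, not from the paper):

```python
import math

def effective_throughput(distance_m, effective_width_m, movement_time_s):
    """Fitts'-law throughput as used in ISO 9241-411-style studies:
    effective index of difficulty (bits, Shannon formulation) per second."""
    id_e = math.log2(distance_m / effective_width_m + 1.0)
    return id_e / movement_time_s

# e.g. a 0.30 m reach to a 0.03 m effective target selected in 0.8 s
tp = effective_throughput(0.30, 0.03, 0.8)  # ~4.32 bits/s
```

Comparing this quantity across display conditions (rather than raw times alone) lets accuracy differences between conditions be folded into a single performance figure.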
Citations: 2
Redirected Walking Based on Historical User Walking Data
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00021
Cheng-Wei Fan, Sen-Zhe Xu, Peng Yu, Fang-Lue Zhang, Songhai Zhang
With redirected walking (RDW) technology, people can explore large virtual worlds in smaller physical spaces. RDW controls the trajectory of the user's walking in the physical space through subtle adjustments, so as to minimize collisions between the user and the physical space. Previous predictive algorithms place constraints on the user's path according to the spatial layout of the virtual environment and work well when applicable, while reactive algorithms are more general for scenarios involving free exploration or unconstrained movements. However, even in relatively free environments, we can predict the user's walking to a certain extent by analyzing the user's historical walking data, which can help the decision-making of reactive algorithms. This paper proposes a novel RDW method that improves the effect of real-time unrestricted RDW by analyzing and utilizing the user's historical walking data. In this method, the physical space is discretized by considering the user's location and orientation in the physical space. Using the weighted directed graph obtained from the user's historical walking data, we dynamically update the scores of different reachable poses in the physical space during the user's walking. We rank the scores and choose the optimal target position and orientation to guide the user to the best pose. Since simulation experiments have been shown to be effective in many previous RDW studies, we also provide a method to simulate user walking trajectories and generate a dataset. Experiments show that our method outperforms multiple state-of-the-art methods in various environments of different sizes and spatial layouts.
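The graph-based scoring step can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's algorithm: poses are discretized as `(x, y, heading)` tuples, edge weights simply count historical transitions, and the "score" of a reachable pose is its transition weight from the current pose.

```python
from collections import defaultdict

def build_transition_graph(trajectories):
    """Weighted directed graph over discretized poses: edge weight counts
    how often logged walks moved from pose a to pose b."""
    graph = defaultdict(lambda: defaultdict(int))
    for path in trajectories:
        for a, b in zip(path, path[1:]):
            graph[a][b] += 1
    return graph

def best_next_pose(graph, current_pose, reachable):
    """Rank reachable poses by historical transition weight from current_pose."""
    weights = graph.get(current_pose, {})
    return max(reachable, key=lambda p: weights.get(p, 0))

# Hypothetical logged walks; each pose is (x_cell, y_cell, heading_deg).
logs = [
    [(0, 0, 0), (1, 0, 0), (1, 1, 90)],
    [(0, 0, 0), (1, 0, 0), (2, 0, 0)],
    [(1, 0, 0), (2, 0, 0), (3, 0, 0)],
]
g = build_transition_graph(logs)
choice = best_next_pose(g, (1, 0, 0), [(1, 1, 90), (2, 0, 0)])
```

In the paper the scores are updated dynamically during walking and combined with the redirection controller; the sketch only shows why a transition graph over historical data yields a ranking of candidate target poses.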
Citations: 0
Exploring Neural Biomarkers in Young Adults Resistant to VR Motion Sickness: A Pilot Study of EEG
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00048
Gang Li, Katharina Margareta Theresa Pöhlmann, Mark Mcgill, C. Chen, S. Brewster, F. Pollick
VR (Virtual Reality) Motion Sickness (VRMS) refers to purely visually induced motion sickness. Not everyone is susceptible to VRMS, but those who experience it often withdraw from the ongoing VR application because of nausea. VRMS represents a serious challenge in the field of VR ergonomics and human factors. As other neuro-ergonomics researchers have done before, this paper treats VRMS as a brain-state problem, since various etiologies of VRMS support the claim that it is caused by disagreement between the vestibular and visual sensory inputs. However, what sets this work apart from the existing literature is that it explores anti-VRMS brain patterns via electroencephalogram (EEG) in VRMS-resistant individuals. Based on existing datasets from a previous study, we found enhanced theta activity in the left parietal cortex in VRMS-resistant individuals (N=10) compared to VRMS-susceptible individuals (N=10). Although the sample size is not large, this finding achieved a medium effect size. This finding offers new hypotheses regarding how to reduce VRMS by enhancing brain function itself (e.g., via non-invasive transcranial electrostimulation techniques) without the need to redesign existing VR content.
Citations: 0
Enhancing the Reading Experience on AR HMDs by Using Smartphones as Assistive Displays
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00053
Sunyoung Bang, Woontack Woo
The reading experience on current augmented reality (AR) head-mounted displays (HMDs) is often impeded by the devices' low perceived resolution, translucency, and small field of view, especially in situations involving lengthy text. Although many researchers have proposed methods to resolve this issue, these inherent characteristics prevent such displays from delivering readability on par with that of more traditional displays. As a solution, we explore the use of smartphones as assistive displays for AR HMDs. To validate the feasibility of our approach, we conducted a user study in which we compared a smartphone-assisted hybrid interface against using the HMD alone for two different text lengths. The results demonstrate that the hybrid interface yields a lower task load regardless of text length, although it does not improve task performance. Furthermore, the hybrid interface provides a better experience regarding user comfort, visual fatigue, and perceived readability. Based on these results, we claim that joining the spatial output capabilities of the HMD with the high-resolution display of the smartphone is a viable solution for improving the reading experience in AR.
Citations: 0
Fully Automatic Blendshape Generation for Stylized Characters
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00050
Jingying Wang, Yilin Qiu, Keyu Chen, Yu Ding, Ye Pan
Avatars are one of the most important elements in virtual environments. Real-time facial retargeting technology is of vital importance in AR/VR interaction, filmmaking, and the entertainment industry, and avatar blendshapes are one of its key assets. Previous works either focused on characters with the same topology, which cannot be generalized to arbitrary avatars, or used optimization methods that place high demands on the dataset. In this paper, we adopt the essence of deep learning and feature transfer to realize deformation transfer, thereby generating blendshapes for target avatars based on the given sources. We propose a Variational Autoencoder (VAE) to extract the latent space of the avatars and then use a Multilayer Perceptron (MLP) model to realize the translation between the latent spaces of the source avatar and target avatars. By decoding the latent codes of different blendshapes, we can obtain blendshapes for the target avatars with the same semantics as those of the source. We qualitatively and quantitatively compared our method with both classical and learning-based methods. The results reveal that the blendshapes generated by our method achieve higher similarity to the ground-truth blendshapes than state-of-the-art methods. We also demonstrated that our method can be applied to expression transfer for stylized characters with different topologies.
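The encode-translate-decode pipeline described above can be sketched abstractly. This is a toy stand-in, not the paper's networks: random linear maps play the roles of the trained VAE encoder/decoder, a single nonlinear map stands in for the latent-translation MLP, and all dimensions are invented for illustration.

```python
import numpy as np

# Toy sketch of the pipeline's data flow (all matrices random; illustrative
# only): encode a source blendshape into latent space, translate between the
# source and target characters' latent spaces, decode on the target mesh.
rng = np.random.default_rng(0)
n_src, n_tgt, d_latent = 300, 240, 16  # vertex-offset dims, latent size

enc_src = rng.normal(size=(d_latent, n_src))       # stand-in: source VAE encoder
dec_tgt = rng.normal(size=(n_tgt, d_latent))       # stand-in: target VAE decoder
translate = rng.normal(size=(d_latent, d_latent))  # stand-in: latent-translation MLP

src_blendshape = rng.normal(size=n_src)   # e.g. vertex offsets of one expression
z_src = enc_src @ src_blendshape          # source latent code
z_tgt = np.tanh(translate @ z_src)        # translated latent code (MLP-like)
tgt_blendshape = dec_tgt @ z_tgt          # same expression semantics, target mesh
```

The key design point the sketch mirrors is that the two characters never need a shared mesh topology: correspondence is established only between their latent spaces, so the decoder is free to output a target mesh of a different vertex count.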
Cited by: 0
IEEE VR 2023 Organizing Committee
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00008
Cited by: 0