
Latest publications from the 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)

Manipulation of Motion Parallax Gain Distorts Perceived Distance and Object Depth in Virtual Reality
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00055
Xue Teng, R. Allison, L. Wilcox
Virtual reality (VR) is distinguished by the rich, multimodal, immersive sensory information and affordances provided to the user. However, when moving about an immersive virtual world the visual display often conflicts with other sensory cues due to design, the nature of the simulation, or system limitations (for example, impoverished vestibular motion cues during acceleration in racing games). Given that conflicts between sensory cues have been associated with disorientation or discomfort, and theoretically could distort spatial perception, it is important that we understand how and when they are manifested in the user experience. To this end, this set of experiments investigates the impact of mismatch between physical and virtual motion parallax on the perception of the depth of an apparently perpendicular dihedral angle (a fold) and its distance. We applied gain distortions between visual and kinesthetic head motion during lateral sway movements and measured the effect of gain on depth, distance and lateral space compression. We found that under monocular viewing, observers made smaller object depth and distance settings, especially when the gain was greater than 1. Estimates of target distance declined with increasing gain under monocular viewing. Similarly, mean set depth decreased with increasing gain under monocular viewing, except at 6.0 m. The effect of gain was minimal when observers viewed the stimulus binocularly. Further, binocular viewing (stereopsis) improved the precision but not necessarily the accuracy of gain perception. Overall, the lateral compression of space was similar in the stereoscopic and monocular test conditions. Taken together, our results show that the use of large presentation distances (at 6 m) combined with binocular cues to depth and distance enhanced humans' tolerance to visual and kinesthetic mismatch.
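The gain manipulation described in this abstract can be pictured with a minimal sketch in which the lateral (sway) component of the tracked head position is scaled before driving the virtual camera, so a gain of 1 reproduces natural motion parallax while gains above or below 1 introduce a visuo-kinesthetic mismatch. The function name, axis convention, and numbers below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apply_parallax_gain(head_pos, sway_origin, gain,
                        sway_axis=np.array([1.0, 0.0, 0.0])):
    """Scale the lateral (sway) component of the tracked head position by `gain`.

    head_pos    : tracked head position in world coordinates, shape (3,)
    sway_origin : reference position where the sway started, shape (3,)
    gain        : visual/kinesthetic gain; 1.0 means no distortion
    sway_axis   : unit vector along the lateral sway direction (assumed x-axis)
    """
    offset = head_pos - sway_origin
    lateral = np.dot(offset, sway_axis) * sway_axis   # lateral component of the motion
    other = offset - lateral                          # non-lateral motion stays unchanged
    return sway_origin + gain * lateral + other

# Example: a 10 cm physical sway rendered with gain 1.5 becomes a 15 cm virtual sway.
virtual_cam = apply_parallax_gain(np.array([0.10, 1.65, 0.0]),
                                  np.array([0.00, 1.65, 0.0]),
                                  gain=1.5)
# virtual camera ends up at x = 0.15 m instead of the physical 0.10 m
```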
Citations: 0
Comparing Visual Attention with Leading and Following Virtual Agents in a Collaborative Perception-Action Task in VR
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00031
Sai-Keung Wong, Matias Volonte, Kuan-Yu Liu, Elham Ebrahimi, Sabarish V. Babu
This paper presents a within-subject study investigating the effects of leading and following behaviors on user visual attention when collaborating with a virtual agent (VA) while performing transportation tasks in immersive virtual environments. The task was to carry a target object from one location to a predefined location. There were two conditions, namely leader VA (LVA) and follower VA (FVA). The leader gave instructions to the follower to perform actions. In the FVA condition, users played the leader role, while they played the follower role in the LVA condition. The users and the VA communicated via spoken language. During the experiment, participants wore a head-mounted display and physically walked around a room. In each condition, each participant performed 20 object-transportation trials with different types of objects. Our preliminary results revealed significant differences in user visual attention behaviors between the follower and leader VA conditions during the transportation tasks.
Citations: 0
Exploring the Social Influence of Virtual Humans Unintentionally Conveying Conflicting Emotions
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00072
Zubin Choudhary, Nahal Norouzi, A. Erickson, Ryan Schubert, G. Bruder, Gregory F. Welch
The expression of human emotion is integral to social interaction, and in virtual reality it is increasingly common to develop virtual avatars that attempt to convey emotions by mimicking these visual and aural cues, i.e., facial and vocal expressions. However, errors in (or the absence of) facial tracking can result in incorrect facial expressions being rendered on these virtual avatars. For example, a virtual avatar may speak with a happy or unhappy vocal inflection while their facial expression remains otherwise neutral. In circumstances where there is conflict between the avatar's facial and vocal expressions, it is possible that users will incorrectly interpret the avatar's emotion, which may have unintended consequences in terms of social influence or the outcome of the interaction. In this paper, we present a human-subjects study (N = 22) aimed at understanding the impact of conflicting facial and vocal emotional expressions. Specifically, we explored three levels of emotional valence (unhappy, neutral, and happy) expressed in both visual (facial) and aural (vocal) forms. We also investigated three levels of head scale (down-scaled, accurate, and up-scaled) to evaluate whether head scale affects user interpretation of the conveyed emotion. We found significant effects of different multimodal expressions on happiness and trust perception, while no significant effect was observed for head scale. Evidence from our results suggests that facial expressions have a stronger impact than vocal expressions. Additionally, as the difference between the two expressions increases, the multimodal expression becomes less predictable. For example, for the happy-looking and happy-sounding multimodal expression, we expect and see high happiness ratings and high trust; however, if one of the two expressions changes, this mismatch makes the expression less predictable. We discuss the relationships, implications, and guidelines for social applications that aim to leverage multimodal social cues.
Citations: 1
Remapping Control in VR for Patients with AMD
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00030
Michael Nitsche, B. Bosley, S. Primo, Jisu Park, Daniel Carr
Age-related Macular Degeneration (AMD) is the leading cause of vision loss among persons over 50. We present a two-part interface consisting of a VR-based visualization for AMD patients and an interconnected doctor interface for optimizing this VR view. It focuses on remapping imagery to provide customized image optimizations. The system allows doctors to generate a tailored, patient-specific VR visualization. We pilot-tested the doctor interface with eye care professionals (n = 10). The results indicate the potential of VR-based eye care for doctors to help visually impaired patients, but also show that a training phase is necessary to establish new technologies in vision rehabilitation.
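Remapping imagery around a damaged macular region is often implemented as a spatial warp that displaces content away from a central scotoma toward healthier peripheral vision. The sketch below shows one such radial warp, assuming a circular scotoma; the function, its parameters, and the warp profile are illustrative assumptions and are not taken from the paper's patient-specific optimizations.

```python
import numpy as np

def remap_around_scotoma(image, center, radius, push=0.4):
    """Push image content radially outward from a circular scotoma region.

    image  : HxWx3 uint8/float array
    center : (cx, cy) scotoma centre in pixels
    radius : scotoma radius in pixels
    push   : fraction by which central content is displaced outward
    """
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xx - center[0], yy - center[1]
    r = np.sqrt(dx * dx + dy * dy) + 1e-6
    # Near the scotoma, sample from closer to the centre, so hidden content
    # reappears further out; far away the mapping tends to the identity.
    shrink = np.clip(1.0 - push * np.exp(-(r / radius) ** 2), 0.2, 1.0)
    src_x = np.clip(center[0] + dx * shrink, 0, w - 1).astype(int)
    src_y = np.clip(center[1] + dy * shrink, 0, h - 1).astype(int)
    return image[src_y, src_x]
```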
Citations: 0
Where to Render: Studying Renderability for IBR of Large-Scale Scenes
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00051
Zimu Yi, Ke Xie, Jiahui Lyu, Minglun Gong, Hui Huang
Image-based rendering (IBR) techniques enable presenting real scenes interactively to viewers and hence are a key component for implementing VR telepresence. The quality of IBR results depends on the set of pre-captured views, the rendering algorithm used, and the camera parameters of the novel view to be synthesized. Numerous methods have been proposed for optimizing the set of captured images and enhancing the rendering algorithms. However, from which regions IBR methods can synthesize satisfactory results is not yet well studied. In this work, we introduce the concept of renderability, which predicts the quality of IBR results at any given viewpoint and view direction. Consequently, the renderability values evaluated over the 5D camera parameter space form a field, which effectively guides viewpoint/trajectory selection for IBR, especially for challenging large-scale 3D scenes. To demonstrate this capability, we designed two VR applications: a path planner that allows users to navigate through sparsely captured scenes with controllable rendering quality, and a view selector that provides an overview of a scene from diverse and high-quality perspectives. We believe the renderability concept, the proposed evaluation method, and the suggested applications will motivate and facilitate the use of IBR in various interactive settings.
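A renderability field of this kind can be thought of as a scalar quality score sampled over candidate viewpoints and view directions. The heuristic below scores a candidate view by how close and how similarly oriented its nearest pre-captured views are; it is only a stand-in for the paper's renderability measure, and all names and the scoring formula are assumptions.

```python
import numpy as np

def coverage_score(view_pos, view_dir, cap_positions, cap_directions, k=4):
    """Heuristic renderability proxy for one candidate viewpoint.

    view_pos, view_dir : candidate camera position (3,) and unit view direction (3,)
    cap_positions      : (N, 3) positions of the pre-captured views
    cap_directions     : (N, 3) unit view directions of the pre-captured views
    k                  : number of nearest captured views considered
    """
    d = np.linalg.norm(cap_positions - view_pos, axis=1)   # distance to each capture
    nearest = np.argsort(d)[:k]
    angular = cap_directions[nearest] @ view_dir            # cosine similarity of directions
    # Close, similarly oriented captures yield a higher (better) score.
    return float(np.mean(np.clip(angular, 0.0, 1.0) / (1.0 + d[nearest])))

# Sampling the 5D field: evaluate coverage_score on a grid of positions and
# directions, then restrict navigation (path planning) to regions whose score
# exceeds a chosen quality threshold.
```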
Citations: 0
Proposal for an aerial display using dynamic projection mapping on a distant flying screen
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00075
Masatoshi Iuchi, Yuito Hirohashi, H. Oku
In this study, we propose a method for an aerial display. The method uses a high-speed gaze control system and a laser display to perform projection mapping on a distant screen suspended from a flying drone. A prototype system was developed and successfully demonstrated dynamic projection mapping on a screen attached to a flying drone at a distance of about 36 m, indicating the effectiveness of the proposed method.
Citations: 2
VGTC Virtual Reality Best Dissertation Award
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00089
Praneeth Kumar Chakravarthula
The 2023 VGTC Virtual Reality Best Dissertation Award goes to Praneeth Kumar Chakravarthula, a 2021 graduate from the University of North Carolina at Chapel Hill, for his dissertation entitled "Towards Everyday-use Augmented Reality Eyeglasses", under the supervision of Prof. Henry Fuchs. Praneeth Chakravarthula is currently a research fellow at Princeton University and a Research Assistant Professor at the University of North Carolina at Chapel Hill. His research interests lie at the intersection of optics, graphics, perception, optimization and machine learning. Dr. Chakravarthula obtained his Ph.D. from UNC Chapel Hill under the supervision of Prof. Henry Fuchs. His Ph.D. dissertation makes progress "towards everyday-use augmented reality eyeglasses" and makes significant advances in three distinct areas: 1) holographic displays and advanced algorithms for generating high-quality true 3D holographic images, 2) hardware and software for robust and comprehensive 3D eye tracking via Purkinje images, and 3) automatic focus-adjusting AR display eyeglasses for well-focused virtual and real imagery, towards potentially achieving 20/20 vision for users of all ages. Since the eyes cannot focus at very near distances, existing AR/VR head-mounted displays use bulky lenses to virtually project the display panel at a long distance on which the eyes can comfortably focus. However, this not only uncomfortably increases the bulk of the display but also severely affects the natural functioning of the human visual system by causing the vergence-accommodation conflict.
Citations: 0
CompenHR: Efficient Full Compensation for High-resolution Projector
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00029
Yuxi Wang, H. Ling, Bingyao Huang
Full projector compensation is a practical task in projector-camera systems. It aims to find a projector input image, called the compensation image, such that when projected it cancels the geometric and photometric distortions caused by the physical environment and hardware. State-of-the-art methods use deep learning to address this problem and show promising performance for low-resolution setups. However, directly applying deep learning to high-resolution setups is impractical due to the long training time and high memory cost. To address this issue, this paper proposes a practical full-compensation solution. First, we design an attention-based grid refinement network to improve geometric correction quality. Second, we integrate a novel sampling scheme into an end-to-end compensation network to reduce computation and introduce attention blocks to preserve key features. Finally, we construct a benchmark dataset for high-resolution projector full compensation. In experiments, our method demonstrates clear advantages in both efficiency and quality.
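Conceptually, full compensation can be posed as inverting a learned projector-to-camera forward model: find the projector input whose predicted capture matches the desired appearance. The sketch below expresses that inverse problem as a plain gradient-descent loop in PyTorch, assuming a pretrained differentiable model named forward_model; it illustrates the problem formulation only and is not CompenHR's network architecture or sampling scheme.

```python
import torch

def solve_compensation(forward_model, desired, steps=500, lr=0.02):
    """Optimize a projector input whose predicted camera capture matches `desired`.

    forward_model : differentiable module mapping projector input -> predicted capture
    desired       : target appearance, tensor of shape (1, 3, H, W) with values in [0, 1]
    """
    comp = desired.clone().requires_grad_(True)        # initialise the search at the target image
    opt = torch.optim.Adam([comp], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = forward_model(comp.clamp(0.0, 1.0))     # predicted capture of the candidate input
        loss = torch.nn.functional.l1_loss(pred, desired)
        loss.backward()
        opt.step()
    return comp.detach().clamp(0.0, 1.0)               # compensation image to project
```

In practice, learned compensation networks usually predict the compensation image directly in a single forward pass rather than running such a per-image optimization, which is part of why efficiency matters at high resolutions.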
Citations: 0
Investigating Noticeable Hand Redirection in Virtual Reality using Physiological and Interaction Data
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00035
Martin Feick, K. P. Regitz, Anthony Tang, Tobias Jungbluth, Maurice Rekrut, Antonio Krüger
Hand redirection is effective so long as the introduced offsets are not noticeably disruptive to users. In this work, we investigate the use of physiological and interaction data to detect movement discrepancies between a user's real and virtual hand, working towards a novel approach for identifying discrepancies that are too large and can therefore be noticed. We ran a study with 22 participants, collecting EEG, ECG, EDA, RSP, and interaction data. Our results suggest that EEG and interaction data can be reliably used to detect visuo-motor discrepancies, whereas ECG and RSP seem to suffer from inconsistencies. Our findings also show that participants quickly adapt to large discrepancies, and that they constantly attempt to establish a stable mental model of their environment. Together, these findings suggest that there is no absolute threshold for possible non-detectable discrepancies; instead, it depends primarily on participants' most recent experience with this kind of interaction.
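Hand redirection itself is usually implemented by adding an offset to the rendered hand that grows as the reach progresses, so the visual hand gradually diverges from the felt (proprioceptive) hand. The sketch below shows one common linear blending formulation; the assumed reach length and interpolation scheme are illustrative and not necessarily the offset model used in this study.

```python
import numpy as np

def redirect_hand(real_hand, start, target_offset, reach_length=0.4):
    """Blend an offset into the virtual hand position as the reach progresses.

    real_hand     : tracked hand position (3,)
    start         : hand position at the start of the reach (3,)
    target_offset : full offset (3,) applied when the reach is complete
    reach_length  : assumed total reach distance in metres
    """
    progress = np.clip(np.linalg.norm(real_hand - start) / reach_length, 0.0, 1.0)
    return real_hand + progress * target_offset   # virtual hand drifts toward the offset goal

# Halfway through a 40 cm reach, half of a 5 cm lateral offset is applied.
virtual_hand = redirect_hand(np.array([0.20, 1.10, 0.30]),
                             np.array([0.00, 1.10, 0.30]),
                             np.array([0.05, 0.00, 0.00]))
```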
Citations: 2
AR-MoCap: Using Augmented Reality to Support Motion Capture Acting
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00047
Alberto Cannavò, F. G. Pratticò, Alberto Bruno, Fabrizio Lamberti
Technology is disrupting the way films involving visual effects are produced. Chroma-key, LED walls, motion capture (mocap), 3D visual storyboards, and simulcams are only a few examples of the many changes introduced in the cinema industry over the last years. Although these technologies are becoming commonplace, they present new, unexplored challenges to actors. In particular, when mocap is used to record the actors' movements with the aim of animating digital character models, an increased workload can be expected for the people on stage. In fact, actors have to rely largely on their imagination to understand what the digitally created characters will actually be seeing and feeling. This paper focuses on this specific domain and aims to demonstrate how Augmented Reality (AR) can help actors when shooting mocap scenes. To this purpose, we devised a system named AR-MoCap that actors can use to rehearse a scene in AR on the real set before actually shooting it. Through an Optical See-Through Head-Mounted Display (OST-HMD), an actor can see, e.g., the digital characters of other actors wearing mocap suits overlapped in real time onto their bodies. Experimental results showed that, compared to the traditional approach based on physical props and other cues, the devised system can help the actors position themselves and direct their gaze while shooting the scene, while also improving spatial and social presence, as well as perceived effectiveness.
Citations: 0