
Latest publications from the 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)

Lightweight Scene-aware Rain Sound Simulation for Interactive Virtual Environments
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00038
Haonan Cheng, Shiguang Liu, Jiawan Zhang
We present a lightweight and efficient rain sound synthesis method for interactive virtual environments. Existing rain sound simulation methods require massive superposition of scene-specific precomputed rain sounds, which incurs excessive memory consumption for virtual reality systems (e.g. video games) with limited audio memory budgets. To address this issue, we reduce the audio memory budget by introducing a lightweight rain sound synthesis method based on only eight physically inspired basic rain sounds. First, in order to generate sufficiently varied rain sounds from limited sound data, we propose an exponential-moving-average-based frequency-domain additive (FDA) synthesis method to extend and modify the precomputed basic rain sounds. Each rain sound is generated in the frequency domain before conversion back to the time domain, allowing us to extend the rain sound free of temporal distortions and discontinuities. Next, we introduce an efficient binaural rendering method that simulates 3D perception coherent with the visual scene, based on a set of Near-Field Transfer Functions (NFTFs). Various results demonstrate that the proposed method drastically decreases the memory cost (a 77-fold compression) and overcomes the limitations of existing methods in terms of interaction.
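The frequency-domain extension step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: frame spectra of a short base clip are blended with an exponential moving average (EMA) and resynthesized by windowed overlap-add, which avoids temporal discontinuities. All function names and parameters here are illustrative.

```python
import numpy as np

def extend_rain_sound(base, out_len, alpha=0.9, frame=256):
    """Extend a short precomputed rain clip by frequency-domain additive
    synthesis: frame spectra are blended with an exponential moving
    average (EMA) and resynthesized by windowed overlap-add, so the
    extended signal has no temporal discontinuities."""
    hop = frame // 2
    window = np.hanning(frame)
    # Magnitude spectra of the base clip's frames.
    spectra = [np.abs(np.fft.rfft(window * base[i:i + frame]))
               for i in range(0, len(base) - frame, hop)]
    rng = np.random.default_rng(0)
    out = np.zeros(out_len)
    ema = spectra[0]
    pos = 0
    while pos + frame <= out_len:
        # EMA over randomly drawn frame spectra varies the sound smoothly.
        ema = alpha * ema + (1 - alpha) * spectra[rng.integers(len(spectra))]
        phase = rng.uniform(0.0, 2.0 * np.pi, len(ema))  # random phase per frame
        frame_sig = np.fft.irfft(ema * np.exp(1j * phase), n=frame)
        out[pos:pos + frame] += window * frame_sig       # overlap-add
        pos += hop
    return out
```

A real system would derive the spectra from the eight basic rain sounds and modulate them per scene; this sketch only shows the EMA-plus-overlap-add mechanism.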
Citations: 0
You Make Me Sick! The Effect of Stairs on Presence, Cybersickness, and Perception of Embodied Conversational Agents
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00071
Samuel Ang, Amanda Fernandez, Michael Rushforth, J. Quarles
Virtual reality (VR) technologies are used in a diverse range of applications. Many of these involve an embodied conversational agent (ECA), a virtual human who exchanges information with the user. Unfortunately, VR technologies remain inaccessible to many users due to the phenomenon of cybersickness: a collection of negative symptoms, such as nausea and headache, that can appear when immersed in a simulation. Many factors are believed to affect a user's level of cybersickness, but little is known regarding how these factors may influence a user's opinion of an ECA. In this study, we examined the effects of virtual stairs, a factor associated with increased levels of cybersickness. We recruited 39 participants to complete a simulated airport experience. This involved a simple navigation task followed by a brief conversation, in Spanish, with a virtual airport customs agent. Participants completed the experience twice, once walking across flat hallways and once traversing a series of staircases. We collected self-reported ratings of cybersickness, presence, and perception of the ECA. We additionally collected physiological data on heart rate and galvanic skin response. Results indicate that the virtual staircases increased users' levels of cybersickness and reduced the perceived realism of the ECA, but increased levels of presence.
Citations: 1
Investigating Spatial Representation of Learning Content in Virtual Reality Learning Environments
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00019
Manshul Belani, Harsh Vardhan Singh, Aman Parnami, Pushpendra Singh
A recent surge in the application of Virtual Reality in education has made VR Learning Environments (VRLEs) prevalent in fields ranging from aviation, medicine, and skill training to teaching factual and conceptual content. In spite of the multiple 3D affordances provided by VR, learning content placement in VRLEs has mostly been limited to static placement in the environment. We conducted two studies to investigate the effect of different spatial representations of learning content in virtual environments on learning outcomes and user experience. In the first study, we examined the effects of placing content at four different places - world-anchored (TV screen placed in the environment), user-anchored (panel anchored to the wrist or head-mounted display of the user), and object-anchored (panel anchored to the object associated with the current content) - in a VR environment with forty-two participants, in the context of learning how to operate a laser cutting machine through an immersive tutorial. In the follow-up study, twenty-two participants from the first study were given the option to choose among these four placements to understand their preferences. The effects of placement were examined on learning outcome measures: knowledge gain, knowledge transfer, cognitive load, user experience, and user preferences. We found that participants preferred user-anchored (controller condition) and object-anchored placement. While knowledge gain, knowledge transfer, and cognitive load were not found to differ significantly between the four conditions, the object-anchored placement scored significantly better than the TV screen and head-mounted display conditions on the user experience scales of attractiveness, stimulation, and novelty.
Citations: 1
Exploring the Effects of Augmented Reality Notification Type and Placement in AR HMD while Walking
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00067
Hyunjin Lee, Woontack Woo
Augmented reality (AR) helps users easily take in information while walking by presenting virtual information in front of their eyes. However, it remains unclear how to present AR notifications given the expected user reaction to interruption. Therefore, we investigated appropriate placement methods by dividing notifications into types that are handled immediately (high) and types that are handled later (low). We compared two coordinate systems (display-fixed and body-fixed) and three positions (top, right, and bottom) for notification placement. We found significant effects of notification type and placement on how notifications are perceived during the AR notification experience. The display-fixed coordinate system yielded faster responses for high notification types, whereas the body-fixed coordinate system yielded faster walking speeds for low ones. As for position, the high types showed better notification performance at the bottom position, while the low types showed better walking performance at the right position. Based on the findings of our experiment, we offer recommendations for the future design of AR notifications while walking.
Citations: 1
A Large-Scale Study of Proxemics and Gaze in Groups
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00056
M. R. Miller, C. Deveaux, Eugy Han, Nilam Ram, J. Bailenson
Scholars who study nonverbal behavior have focused an incredible amount of work on proxemics, how close people stand to one another, and mutual gaze, whether or not they are looking at one another. Moreover, many studies have demonstrated a correlation between gaze and distance, and so-called equilibrium theory posits that people modulate gaze and distance to maintain proper levels of nonverbal intimacy. Virtual reality scholars have also focused on these two constructs, both for theoretical reasons, as distance and gaze are often used as proxies for psychological constructs such as social presence, and for methodological reasons, as head orientation and body position are automatically produced by most VR tracking systems. However, to date, studies of distance and gaze in VR have largely been conducted in laboratory settings, observing the behavior of a small number of participants for short periods of time. In this experimental field study, we analyze the proxemics and gaze of 232 participants across two experimental studies, each of whom contributed up to about 240 minutes of tracking data during eight weekly 30-minute social virtual reality sessions. Participants' nonverbal behaviors changed in conjunction with context manipulations and over time. Interpersonal distance increased with the size of the virtual room; and both mutual gaze and interpersonal distance increased over time. Overall, participants oriented their heads toward the centers of walls rather than toward the corners of rectangularly aligned environments. Finally, statistical models demonstrated that individual differences matter, with pairs and groups maintaining more consistent differences over time than would be predicted by chance. Implications for theory and practice are discussed.
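The two nonverbal measures at the center of this study can be computed directly from the head position and forward vector that VR tracking systems produce. A minimal sketch, with the gaze-cone angle and vector conventions as illustrative assumptions rather than the authors' analysis pipeline:

```python
import numpy as np

def interpersonal_distance(p1, p2):
    """Euclidean distance between two tracked head positions."""
    return float(np.linalg.norm(p1 - p2))

def mutual_gaze(p1, f1, p2, f2, cone_deg=30.0):
    """Mutual-gaze proxy from head pose: each person's head-forward
    vector must point at the other within a tolerance cone."""
    def looks_at(p_from, fwd, p_to):
        to_other = p_to - p_from
        to_other = to_other / np.linalg.norm(to_other)
        fwd = fwd / np.linalg.norm(fwd)
        cos_angle = np.clip(fwd @ to_other, -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle)) <= cone_deg
    return looks_at(p1, f1, p2) and looks_at(p2, f2, p1)
```

Head orientation is only a proxy for eye gaze, which is why a tolerance cone rather than exact alignment is the usual choice in such analyses.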
Citations: 3
IEEE VR 2023 Table of Contents
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00004
Citations: 0
Comparing Scatterplot Variants for Temporal Trends Visualization in Immersive Virtual Environments
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00082
Carlos Quijano-Chavez, L. Nedel, C. Freitas
Trends are changes in variables or attributes over time, often represented by line plots or scatterplot variants, with time as one of the axes. Interpreting tendencies and estimating trends require observing the behavior of the lines or points with respect to increments, decrements, or both (reversals) in the value of the observed variable. Previous work assessed scatterplot variants such as Animation, Small Multiples, and Overlaid Trails to compare the effectiveness of trend representation on large and small displays, and found differences between them. In this work, we study how best to enable the analyst to explore and perform temporal trend tasks with these same techniques in immersive virtual environments. We designed and conducted a user study based on the approaches of previous work on visualization and interaction techniques, with tasks for comparisons in three-dimensional settings. Results show that Overlaid Trails are the fastest overall, followed by Animation and Small Multiples, while accuracy is task-dependent. We also report results from interaction measures and questionnaires.
Citations: 0
Tell Me Where To Go: Voice-Controlled Hands-Free Locomotion for Virtual Reality Systems
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00028
J. Hombeck, Henrik Voigt, Timo Heggemann, R. Datta, K. Lawonn
As locomotion is an important factor in improving Virtual Reality (VR) immersion and usability, research in this area has been, and continues to be, crucial to the success of VR applications. In recent years, a variety of techniques have been developed and evaluated, ranging from abstract control, vehicle, and teleportation techniques to more realistic techniques such as motion, gestures, and gaze. However, in hands-free scenarios, for example to increase the overall accessibility of an application or in medical settings under sterile conditions, most of the aforementioned techniques cannot be applied. This is where speech as an intuitive means of navigation comes in handy. As systems become more capable of understanding and producing speech, voice interfaces become a valuable alternative for input on all types of devices, taking the quality of hands-free interaction to a new level. However, intuitive user-assisted speech interaction is difficult to realize due to semantic ambiguities in natural-language utterances as well as the high real-time requirements of these systems. In this paper, we investigate steering-based and selection-based locomotion using three speech-based, hands-free methods and compare them with leaning as an established alternative. Our results show that landmark-based locomotion is a convenient, fast, and intuitive way to move between locations in a VR scene. Furthermore, we show that in scenes where landmarks are not available, number-grid-based navigation is a successful solution. Based on this, we conclude that speech is a suitable alternative in hands-free scenarios, and exciting ideas are emerging for future work focused on developing hands-free ad hoc navigation systems for scenes where landmarks do not exist or are difficult to articulate or recognize.
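The landmark-based and number-grid fallback behaviors described above can be sketched as a simple utterance-to-target resolver. The landmark names, positions, and grid convention below are illustrative assumptions, not the paper's system:

```python
from typing import Optional, Tuple

# Named landmarks in the scene (illustrative positions, not from the paper).
LANDMARKS = {
    "fountain": (12.0, 0.0, 4.5),
    "gate": (-3.0, 0.0, 8.0),
    "tower": (0.0, 0.0, -15.0),
}

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4,
                "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

def resolve_target(utterance: str,
                   cell_size: float = 2.0) -> Optional[Tuple[float, float, float]]:
    """Map a recognized utterance to a teleport target: first try named
    landmarks, then fall back to number-grid navigation ("three five"
    meaning grid column 3, row 5) for scenes without usable landmarks."""
    words = utterance.lower().split()
    for name, pos in LANDMARKS.items():
        if name in words:
            return pos
    nums = [NUMBER_WORDS[w] for w in words if w in NUMBER_WORDS]
    if len(nums) >= 2:
        return (nums[0] * cell_size, 0.0, nums[1] * cell_size)
    return None  # no resolvable target; keep the user in place
```

For example, "take me to the fountain" resolves to the fountain's position, while "go to three five" resolves to a grid cell; anything else leaves the user where they are.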
Citations: 0
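The abstract above describes mapping recognized speech to teleport targets, either by landmark name or by a number grid when no landmarks are available. A minimal sketch of that resolution step is shown below; all names here (the `LANDMARKS` table, grid dimensions, command vocabulary) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative landmark- and grid-based voice locomotion resolver.
# Landmark coordinates and grid layout are made up for this sketch.

LANDMARKS = {
    "kitchen": (4.0, 0.0, -2.5),
    "door": (0.0, 0.0, 6.0),
    "window": (-3.0, 1.0, 0.5),
}

GRID_SIZE = 4       # 4x4 number grid overlaid on the walkable area
CELL_METERS = 2.0   # world-space size of one grid cell


def grid_cell_to_position(cell):
    """Map grid cell 1..16 to a world-space teleport target."""
    row, col = divmod(cell - 1, GRID_SIZE)
    return (col * CELL_METERS, 0.0, row * CELL_METERS)


def resolve_target(utterance):
    """Return a teleport target for a recognized utterance, or None.

    Landmark names take priority; spoken numbers fall back to the grid,
    mirroring the landmark vs. number-grid conditions in the abstract.
    """
    for word in utterance.lower().replace(".", "").split():
        if word in LANDMARKS:
            return LANDMARKS[word]
        if word.isdigit():
            cell = int(word)
            if 1 <= cell <= GRID_SIZE * GRID_SIZE:
                return grid_cell_to_position(cell)
    return None


print(resolve_target("take me to the kitchen"))  # (4.0, 0.0, -2.5)
print(resolve_target("go to 6"))                 # (2.0, 0.0, 2.0)
```

In practice a real system would sit behind a speech recognizer and handle semantic ambiguity (synonyms, partial matches), which is exactly the difficulty the abstract highlights.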
Cross-View Visual Geo-Localization for Outdoor Augmented Reality
Pub Date : 2023-03-01 DOI: 10.1109/VR55154.2023.00064
Niluthpol Chowdhury Mithun, Kshitij Minhas, Han-Pang Chiu, T. Oskiper, Mikhail Sizintsev, S. Samarasekera, Rakesh Kumar
Precise estimation of global orientation and location is critical to ensure a compelling outdoor Augmented Reality (AR) experience. We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in cross-view matching. However, most of the prior works focus only on location estimation, ignoring orientation, which cannot meet the requirements in outdoor AR applications. We propose a new transformer neural network-based model and a modified triplet ranking loss for joint location and orientation estimation. Experiments on several benchmark cross-view geo-localization datasets show that our model achieves state-of-the-art performance. Furthermore, we present an approach to extend the single image query-based geo-localization approach by utilizing temporal information from a navigation pipeline for robust continuous geo-localization. Experimentation on several large-scale real-world video sequences demonstrates that our approach enables high-precision and stable AR insertion.
{"title":"Cross-View Visual Geo-Localization for Outdoor Augmented Reality","authors":"Niluthpol Chowdhury Mithun, Kshitij Minhas, Han-Pang Chiu, T. Oskiper, Mikhail Sizintsev, S. Samarasekera, Rakesh Kumar","doi":"10.1109/VR55154.2023.00064","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00064","url":null,"abstract":"Precise estimation of global orientation and location is critical to ensure a compelling outdoor Augmented Reality (AR) experience. We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in cross-view matching. However, most of the prior works focus only on location estimation, ignoring orientation, which cannot meet the requirements in outdoor AR applications. We propose a new transformer neural network-based model and a modified triplet ranking loss for joint location and orientation estimation. Experiments on several benchmark cross-view geo-localization datasets show that our model achieves state-of-the-art performance. Furthermore, we present an approach to extend the single image query-based geo-localization approach by utilizing temporal information from a navigation pipeline for robust continuous geo-localization. 
Experimentation on several large-scale real-world video sequences demonstrates that our approach enables high-precision and stable AR insertion.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127180150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
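The abstract above trains cross-view matching with a modified triplet ranking loss. The paper's specific modification for joint location and orientation is not reproduced here, but the base form of such a loss — a soft-margin triplet loss that pulls the matching aerial embedding toward the ground-view anchor and pushes non-matching ones away — can be sketched as follows; the margin scale `alpha` and the toy embeddings are illustrative choices.

```python
import math

def l2(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))


def triplet_ranking_loss(anchor, positive, negative, alpha=10.0):
    """Soft-margin triplet loss: log(1 + exp(alpha * (d(a,p) - d(a,n)))).

    Near zero when the positive is much closer to the anchor than the
    negative; grows roughly linearly when the ranking is violated.
    """
    return math.log1p(
        math.exp(alpha * (l2(anchor, positive) - l2(anchor, negative)))
    )


ground = [1.0, 0.0]        # ground-view query embedding (toy 2-D example)
aerial_match = [0.9, 0.1]  # embedding of the correct aerial tile
aerial_other = [-1.0, 0.0] # embedding of a wrong aerial tile

good = triplet_ranking_loss(ground, aerial_match, aerial_other)
bad = triplet_ranking_loss(ground, aerial_other, aerial_match)
print(good < bad)  # True: the correct ranking yields the smaller loss
```

Averaged over mined triplets, minimizing this quantity orders the aerial database so that nearest-neighbor retrieval of the ground query returns the geo-referenced match.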
International Program Supercommittee
Pub Date : 2023-03-01 DOI: 10.1109/vr55154.2023.00009
{"title":"International Program Supercommittee","authors":"","doi":"10.1109/vr55154.2023.00009","DOIUrl":"https://doi.org/10.1109/vr55154.2023.00009","url":null,"abstract":"","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123063103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0