
Latest publications: 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

IMPReSS: Improved Multi-Touch Progressive Refinement Selection Strategy
Elaheh Samimi, Robert J. Teather
We developed a progressive refinement technique for VR object selection using a smartphone as a controller. Our technique, IMPReSS, combines conventional progressive refinement selection with the marking menu-based CountMarks. CountMarks uses multi-finger touch gestures to “short-circuit” multi-item marking menus, allowing users to indicate a specific item in a sub-menu by pressing a specific number of fingers on the screen while swiping in the direction of the desired menu. IMPReSS uses this idea to reduce the number of refinements necessary during progressive refinement selection. We compared our technique with SQUAD and a multi-touch technique in terms of search time, selection time, and accuracy. The results showed that IMPReSS was both the fastest and most accurate of the techniques, likely due to a combination of tactile feedback from the smartphone screen and the advantage of fewer refinement steps.
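As an illustration of the CountMarks-style "short-circuit" selection the abstract describes, the sketch below maps a swipe direction to a sub-menu and the number of touching fingers to an item within it. The menu contents, direction thresholds, and function names are hypothetical assumptions, not the paper's implementation.

```python
import math

# Hypothetical illustration of CountMarks-style selection: the swipe direction
# chooses a sub-menu and the number of fingers on the screen chooses an item
# inside it. All names and thresholds are assumptions, not from the paper.

MENUS = {
    "up":    ["copy", "paste", "cut", "undo"],
    "down":  ["red", "green", "blue", "alpha"],
    "left":  ["cube", "sphere", "cone", "torus"],
    "right": ["move", "rotate", "scale", "delete"],
}

def swipe_direction(dx: float, dy: float) -> str:
    """Quantize a 2D swipe vector into one of four menu directions."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    if 45 <= angle < 135:
        return "up"
    if 135 <= angle < 225:
        return "left"
    if 225 <= angle < 315:
        return "down"
    return "right"

def count_marks_select(finger_count: int, dx: float, dy: float) -> str:
    """Map (finger count, swipe vector) directly to a menu item."""
    items = MENUS[swipe_direction(dx, dy)]
    index = min(max(finger_count, 1), len(items)) - 1  # 1 finger -> first item
    return items[index]

# Example: a two-finger swipe to the right selects the second item ("rotate").
print(count_marks_select(2, dx=1.0, dy=0.1))
```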
{"title":"IMPReSS: Improved Multi-Touch Progressive Refinement Selection Strategy","authors":"Elaheh Samimi, Robert J. Teather","doi":"10.1109/VRW55335.2022.00069","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00069","url":null,"abstract":"We developed a progressive refinement technique for VR object selection using a smartphone as a controller. Our technique, IMPReSS, combines conventional progressive refinement selection with the marking menu-based CountMarks. CountMarks uses multi-finger touch gestures to “short-circuit” multi-item marking menus, allowing users to indicate a specific item in a sub-menu by pressing a specific number of fingers on the screen while swiping in the direction of the desired menu. IMPReSS uses this idea to reduce the number of refinements necessary during progressive refinement selection. We compared our technique with SQUAD and a multi-touch technique in terms of search time, selection time, and accuracy. The results showed that IMPReSS was both the fastest and most accurate of the techniques, likely due to a combination of tactile feedback from the smartphone screen and the advantage of fewer refinement steps.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115148859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The Development of a Common Factors Based Virtual Reality Therapy System for Remote Psychotherapy Applications
Christopher Tacca, B. Kerr, Elizabeth Friis
In-person psychotherapy can be inaccessible to many, particularly isolated populations. Remote psychotherapy has been proposed as a more accessible alternative. However, limitations of current solutions, including difficulty in providing a restorative therapeutic environment and a therapeutic alliance, mean that many people are left behind and do not receive adequate treatment. A common-factors-based VR and EEG remote psychotherapy system can make remote psychotherapy more accessible and effective for people for whom current options are not sufficient.
{"title":"The Development of a Common Factors Based Virtual Reality Therapy System for Remote Psychotherapy Applications","authors":"Christopher Tacca, B. Kerr, Elizabeth Friis","doi":"10.1109/VRW55335.2022.00100","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00100","url":null,"abstract":"In person psychotherapy can be inaccessible to many, particularly isolated populations. Remote psychotherapy has been proposed as a more accessible alternative. However, certain limitations in the current solutions including providing a restorative therapeutic environment and therapeutic alliance have meant that many other people are left behind and do not receive adequate treatment. A common factors based VR and EEG remote psychotherapy system can make remote psychotherapy more accessible and effective for people in which current options are not sufficient.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115661642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Flick Typing: Toward A New XR Text Input System Based on 3D Gestures and Machine Learning
Tian Yang, Powen Yao, Michael Zyda
We propose a new text entry input method in Extended Reality that we call Flick Typing. Flick Typing utilizes the user's knowledge of a QWERTY keyboard layout, but does not explicitly provide visualization of the keys, and is agnostic to user posture or keyboard position. To type with Flick Typing, users move their controller to where they think the target key is with respect to the controller's starting position and orientation, often with a simple flick of their wrists. A machine learning model is trained and used to adapt to the user's mental map of the keys in 3D space.
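A minimal sketch of the general idea, assuming each flick is summarized as a pose delta relative to the controller's starting position and orientation and then classified into a key. A nearest-centroid classifier stands in for the paper's machine learning model; the feature layout and calibration data are illustrative assumptions only.

```python
import numpy as np

# Sketch: a controller flick, expressed relative to its starting pose, is
# classified into a QWERTY key by a learned model. A nearest-centroid
# classifier stands in for the paper's model; everything here is illustrative.

def flick_features(start_pose: np.ndarray, end_pose: np.ndarray) -> np.ndarray:
    """Pose = [x, y, z, yaw, pitch, roll]; features are deltas from the start."""
    return end_pose - start_pose

class FlickClassifier:
    def __init__(self):
        self.centroids = {}

    def fit(self, features: np.ndarray, labels: list[str]) -> None:
        for key in set(labels):
            rows = features[[i for i, k in enumerate(labels) if k == key]]
            self.centroids[key] = rows.mean(axis=0)

    def predict(self, feature: np.ndarray) -> str:
        return min(self.centroids,
                   key=lambda k: np.linalg.norm(feature - self.centroids[k]))

# Toy usage with synthetic calibration flicks for two keys.
rng = np.random.default_rng(0)
train_x = np.vstack([rng.normal([0.1, 0.2, 0, 5, 0, 0], 0.01, (20, 6)),
                     rng.normal([-0.1, 0.2, 0, -5, 0, 0], 0.01, (20, 6))])
train_y = ["p"] * 20 + ["q"] * 20
clf = FlickClassifier()
clf.fit(train_x, train_y)
print(clf.predict(np.array([0.09, 0.21, 0.0, 4.8, 0.0, 0.0])))  # -> "p"
```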
{"title":"Flick Typing: Toward A New XR Text Input System Based on 3D Gestures and Machine Learning","authors":"Tian Yang, Powen Yao, Michael Zyda","doi":"10.1109/VRW55335.2022.00295","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00295","url":null,"abstract":"We propose a new text entry input method in Extended Reality that we call Flick Typing. Flick Typing utilizes the user's knowledge of a QWERTY keyboard layout, but does not explicitly provide visualization of the keys, and is agnostic to user posture or keyboard position. To type with Flick Typing, users will move their controller to where they think the target key is with respect to the controller's starting position and orientation, often with a simple flick of their wrists. Machine learning model is trained and used to adapt to the user's mental map of the keys in 3D space.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116714246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rereading the Narrative Paradox for Virtual Reality Theatre
Xiaotian Jiang, Xueni Pan, J. Freeman
We examined several key issues around audience autonomy in VR theatre. Informed by a literature review and a qualitative user study (grounded theory), we developed a conceptual model that enables a quantifiable evaluation of audience experience in VR theatre. A second user study, inspired by the ‘narrative paradox’, investigates the relationship between spatial exploration and narrative comprehension in two VR performances. Our results show that although navigation distracted the participants from following the full story, they were more engaged, more attached, and had a better overall experience as a result of their freedom to move and interact.
{"title":"Rereading the Narrative Paradox for Virtual Reality Theatre","authors":"Xiaotian Jiang, Xueni Pan, J. Freeman","doi":"10.1109/VRW55335.2022.00299","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00299","url":null,"abstract":"We examined several key issues around audience autonomy in VR theatre. Informed by a literature review and a qualitative user study (grounded theory), we developed a conceptual model that enables a quantifiable evaluation of audience experience in VR theatre. A second user study inspired by the ‘narrative paradox’, investigates the relationship between spatial exploration and narrative comprehension in two VR performances. Our results show that although navigation distracted the participants from following the full story, they were more engaged, attached and had a better overall experience as a result of their freedom to move and interact.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116785415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
3Dify: Extruding Common 2D Charts with Timeseries Data
R. Brath, Martin Matusiak
3D charts are not common in financial services. We review chart use in practice. We create 3D financial visualizations starting with 2D charts used extensively in financial services, then extend into the third dimension with timeseries data. We embed the 2D view into the 3D scene, constrain interaction, and add depth cues to facilitate comprehension. Usage and extensions indicate success.
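As a rough illustration of the extrusion idea, the sketch below takes a conventional 2D bar chart and extends it along a third, time axis using matplotlib; the data, labels, and layout are made up and not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative extrusion of a 2D bar chart into 3D: each instrument's bar
# becomes a row of bars laid out along a depth (time) axis. Data are made up.

categories = ["Bond A", "Bond B", "Bond C"]
days = np.arange(5)                      # the extruded (time) dimension
values = np.random.default_rng(1).uniform(1, 10, (len(categories), len(days)))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for ci, name in enumerate(categories):
    # One 2D bar per category per day, positioned at depth ci along the y axis.
    ax.bar(days, values[ci], zs=ci, zdir="y", alpha=0.8, label=name)

ax.set_xlabel("Day")
ax.set_ylabel("Instrument")
ax.set_zlabel("Value")
ax.set_yticks(range(len(categories)))
ax.set_yticklabels(categories)
plt.show()
```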
{"title":"3Dify: Extruding Common 2D Charts with Timeseries Data","authors":"R. Brath, Martin Matusiak","doi":"10.1109/VRW55335.2022.00154","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00154","url":null,"abstract":"3D charts are not common in financial services. We review chart use in practice. We create 3D financial visualizations starting with 2D charts used extensively in financial services, then extend into the third dimension with timeseries data. We embed the 2D view into the the 3D scene; constrain interaction and add depth cues to facilitate comprehension. Usage and extensions indicate success.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120850206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Seamless-walk: Novel Natural Virtual Reality Locomotion Method with a High-Resolution Tactile Sensor
Yunho Choi, Hyeonchang Jeon, Sungha Lee, Isaac Han, Yiyue Luo, Seungjun Kim, W. Matusik, Kyung-Joong Kim
Natural movement is a challenging problem in virtual reality locomotion. However, existing foot-based locomotion methods lack naturalness due to physical limitations caused by worn equipment. Therefore, in this study, we propose Seamless-walk, a novel virtual reality (VR) locomotion technique to enable locomotion in the virtual environment by walking on a high-resolution tactile carpet. The proposed Seamless-walk moves the user's virtual character by extracting the user's walking speed and orientation from raw tactile signals using machine learning techniques. We demonstrate that the proposed Seamless-walk is more natural and effective than existing VR locomotion methods by comparing them in VR game-playing tasks.
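A simplified sketch of the kind of pipeline the abstract outlines: each pressure frame from the carpet is reduced to a walking speed and heading that could drive a virtual character. The paper uses machine learning; the center-of-pressure heuristic below is only an illustrative stand-in under assumed units and frame sizes.

```python
import numpy as np

# Sketch: reduce a sequence of tactile-carpet pressure frames to a walking
# speed and heading. A center-of-pressure heuristic stands in for the paper's
# learned models; units (grid cells) and sizes are assumptions.

def center_of_pressure(frame: np.ndarray) -> np.ndarray:
    """frame: 2D array of pressure values; returns the (row, col) centroid."""
    total = frame.sum()
    rows, cols = np.indices(frame.shape)
    return np.array([(rows * frame).sum(), (cols * frame).sum()]) / max(total, 1e-9)

def estimate_locomotion(frames: list[np.ndarray], dt: float):
    """Estimate speed (cells/s) and heading (radians) from consecutive frames."""
    cops = np.array([center_of_pressure(f) for f in frames])
    deltas = np.diff(cops, axis=0) / dt
    mean_velocity = deltas.mean(axis=0)
    speed = float(np.linalg.norm(mean_velocity))
    heading = float(np.arctan2(mean_velocity[1], mean_velocity[0]))
    return speed, heading

# Toy usage: a pressure blob drifting one cell per frame across a 32x32 carpet.
frames = []
for t in range(10):
    f = np.zeros((32, 32))
    f[10 + t, 16] = 1.0
    frames.append(f)
print(estimate_locomotion(frames, dt=0.1))  # speed ~10 cells/s, heading ~0 rad
```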
{"title":"Seamless-walk: Novel Natural Virtual Reality Locomotion Method with a High-Resolution Tactile Sensor","authors":"Yunho Choi, Hyeonchang Jeon, Sungha Lee, Isaac Han, Yiyue Luo, Seungjun Kim, W. Matusik, Kyung-Joong Kim","doi":"10.1109/VRW55335.2022.00199","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00199","url":null,"abstract":"Natural movement is a challenging problem in virtual reality locomotion. However, existing foot-based locomotion methods lack naturalness due to physical limitations caused by wearing equipment. Therefore, in this study, we propose Seamless-walk, a novel virtual reality (VR) locomotion technique to enable locomotion in the virtual environment by walking on a high-resolution tactile carpet. The proposed Seamless-walk moves the user's virtual character by extracting the users' walking speed and orientation from raw tactile signals using machine learning techniques. We demonstrate that the proposed Seamless-walk is more natural and effective than existing VR locomotion methods by comparing them in VR game-playing tasks.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121065398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cloud-Based Cross-Platform Collaborative AR in Flutter
Lars Carius, Christian Eichhorn, D. A. Plecher, G. Klinker
Augmented Reality (AR) has progressed tremendously over the past years, enabling the creation of collaborative experiences and real-time environment tracking on smartphones. The strong tendency towards game engine-based approaches, however, has made it difficult for many businesses to utilize the potential of this technology. We present a novel collaborative AR framework aimed at lowering the entry barriers and operating expenses of AR applications. Our framework includes a cross-platform and cloud-based Flutter plugin combined with a web-based content management system allowing non-technical staff to take over operational tasks such as providing 3D models or moderating community annotations. To provide a state-of-the-art feature set, the AR Flutter plugin builds upon ARCore on Android and ARKit on iOS and unifies the two frameworks using an abstraction layer written in Dart. We show that the cross-platform AR Flutter plugin performs on the same level as native AR frameworks in terms of both application-level metrics and tracking-level qualities such as SLAM keyframes per second and area of tracked planes. Our contribution closes a gap in today's technological landscape by providing an AR framework seamlessly integrating with the familiar development process of cross-platform apps. With the accompanying content management system, AR can be used as a tool to achieve business objectives. The AR Flutter plugin is fully open-source, the code can be found at: https://github.com/CariusLars/ar_flutter_plugin.
{"title":"Cloud-Based Cross-Platform Collaborative AR in Flutter","authors":"Lars Carius, Christian Eichhorn, D. A. Plecher, G. Klinker","doi":"10.1109/VRW55335.2022.00192","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00192","url":null,"abstract":"Augmented Reality (AR) has progressed tremendously over the past years, enabling the creation of collaborative experiences and real-time environment tracking on smartphones. The strong tendency towards game engine-based approaches, however, has made it difficult for many businesses to utilize the potential of this technology. We present a novel collaborative AR framework aimed at lowering the entry barriers and operating expenses of AR applications. Our framework includes a cross-platform and cloud-based Flutter plugin combined with a web-based content management system allowing non-technical staff to take over operational tasks such as providing 3D models or moderating community annotations. To provide a state-of-the-art feature set, the AR Flutter plugin builds upon ARCore on Android and ARKit on iOS and unifies the two frameworks using an abstraction layer written in Dart. We show that the cross-platform AR Flutter plugin performs on the same level as native AR frameworks in terms of both application-level metrics and tracking-level qualities such as SLAM keyframes per second and area of tracked planes. Our contribution closes a gap in today's technological landscape by providing an AR framework seamlessly integrating with the familiar development process of cross-platform apps. With the accompanying content management system, AR can be used as a tool to achieve business objectives. The AR Flutter plugin is fully open-source, the code can be found at: https://github.com/CariusLars/ar_flutter_plugin.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124905805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A Time Reversal Symmetry Based Real-time Optical Motion Capture Missing Marker Recovery Method
Dongdong Weng, Yihan Wang, Dong Li
This paper proposes a deep learning model based on time reversal symmetry for real-time recovery of continuous missing marker sequences in optical motion capture. This paper first uses the time reversal symmetry of human motion as a constraint on the model. A BiLSTM is used to describe the constraint and extract bidirectional spatiotemporal features. This paper also proposes a weighted position loss function for model training, which describes the effect of different joints on the pose. Experimental results show that, compared with existing methods, the proposed method achieves higher accuracy and good real-time performance.
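A schematic sketch of the model family described: a bidirectional LSTM maps marker sequences containing gaps to complete marker positions and is trained with a per-joint weighted position loss. Layer sizes, the joint weights, and the gap-masking scheme are assumptions for illustration (PyTorch), not the paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative BiLSTM marker-recovery model with a weighted position loss.
# Sizes, weights, and masking are assumptions, not the paper's settings.

class MarkerRecoveryBiLSTM(nn.Module):
    def __init__(self, n_markers: int = 41, hidden: int = 128):
        super().__init__()
        in_dim = n_markers * 3
        self.bilstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, in_dim)

    def forward(self, x):               # x: (batch, time, n_markers * 3)
        h, _ = self.bilstm(x)
        return self.head(h)

def weighted_position_loss(pred, target, joint_weights):
    """L2 error per marker, weighted by each joint's influence on the pose."""
    b, t, _ = pred.shape
    err = (pred - target).reshape(b, t, -1, 3).norm(dim=-1)   # (b, t, markers)
    return (err * joint_weights).mean()

# Toy forward/backward pass on random data with a simulated marker gap.
model = MarkerRecoveryBiLSTM()
x = torch.randn(2, 60, 41 * 3)
target = x.clone()
x[:, 20:30, :3] = 0.0                        # 10-frame gap on marker 0
weights = torch.ones(41)
weights[:5] = 2.0                            # emphasize, e.g., torso markers
loss = weighted_position_loss(model(x), target, weights)
loss.backward()
print(float(loss))
```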
{"title":"A Time Reversal Symmetry Based Real-time Optical Motion Capture Missing Marker Recovery Method","authors":"Dongdong Weng, Yihan Wang, Dong Li","doi":"10.1109/VRW55335.2022.00237","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00237","url":null,"abstract":"This paper proposes a deep learning model based on time reversal symmetry for real-time recovery of continuous missing marker sequences in optical motion capture. This paper firstly uses time reversal symmetry of human motion as a constraint of the model. BiLSTM is used to describe the constraint and extract the bidirectional spatiotemporal features. This paper proposes a weight position loss function for model training, which describes the effect of different joints on the pose. Compared with the existing methods, the experimental results show that the proposed method has higher accuracy and good real-time performance.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123574798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
[DC] Leveraging AR Cues towards New Navigation Assistant Paradigm
Yu Zhao
Extensive research has shown that the knowledge required to navigate an unfamiliar environment has been greatly reduced, as many planning and decision-making tasks can be supplanted by automated navigation systems. The progress in augmented reality (AR), particularly AR head-mounted displays (HMDs), foreshadows the prevalence of such devices as computational platforms of the future. AR displays open a new design space on navigational aids for solving this problem by superimposing virtual imagery over the environment. This dissertation abstract proposes a research agenda that investigates how to effectively leverage AR cues to help both navigation efficiency and spatial learning in walking scenarios.
{"title":"[DC] Leveraging AR Cues towards New Navigation Assistant Paradigm","authors":"Yu Zhao","doi":"10.1109/VRW55335.2022.00316","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00316","url":null,"abstract":"Extensive research has shown that the knowledge required to navigate an unfamiliar environment has been greatly reduced as many of the planning and decision-making tasks can be supplanted by the use of automated navigation systems. The progress in augmented reality (AR), particularly AR head-mounted displays (HMDs) foreshadows the prevalence of such devices as computational platforms of the future. AR displays open a new design space on navigational aids for solving this problem by superimposing virtual imagery over the environment. This dissertation abstract proposes a research agenda that investigates how to effectively leverage AR cues to help both navigation efficiency and spatial learning in walking scenarios.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114299457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Using External Video to Attack Behavior-Based Security Mechanisms in Virtual Reality (VR)
Robert Miller, N. Banerjee, Sean Banerjee
As virtual reality (VR) systems become prevalent in domains such as healthcare and education, sensitive data must be protected from attacks. Password-based techniques are circumvented once an attacker gains access to the user's credentials. Behavior-based approaches are susceptible to attacks from malicious users who mimic the actions of a genuine user or gain access to the 3D trajectories. We investigate a novel attack where a malicious user obtains a 2D video of a genuine user interacting in VR. We demonstrate that an attacker can extract 2D motion trajectories from the video and match them to 3D enrollment trajectories to defeat behavior-based VR security.
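A conceptual sketch of the matching step the abstract describes: a 2D trajectory lifted from external video is compared against candidate 3D enrollment trajectories by projecting each candidate into 2D over a sweep of assumed camera angles and scoring the residual. The orthographic camera model, normalization, and scoring are simplifying assumptions, not the paper's method.

```python
import numpy as np

# Sketch: score how well a 2D video trajectory matches a 3D enrollment
# trajectory by projecting the 3D path from candidate view angles.
# Camera model and scoring are simplified assumptions.

def project(traj_3d: np.ndarray, azimuth: float) -> np.ndarray:
    """Orthographic projection of an (N, 3) trajectory from a given view angle."""
    c, s = np.cos(azimuth), np.sin(azimuth)
    x = traj_3d[:, 0] * c + traj_3d[:, 2] * s      # horizontal image axis
    y = traj_3d[:, 1]                              # vertical image axis
    return np.stack([x, y], axis=1)

def normalize(traj: np.ndarray) -> np.ndarray:
    traj = traj - traj.mean(axis=0)
    return traj / (np.linalg.norm(traj, axis=1).max() + 1e-9)

def match_score(video_2d: np.ndarray, enrolled_3d: np.ndarray) -> float:
    """Best (lowest) mean distance over a sweep of candidate camera angles."""
    scores = []
    for az in np.linspace(0, 2 * np.pi, 72, endpoint=False):
        proj = normalize(project(enrolled_3d, az))
        scores.append(np.linalg.norm(proj - normalize(video_2d), axis=1).mean())
    return min(scores)

# Toy usage: the genuine user's enrolled trajectory should score lower
# (closer) than an impostor's against the observed video trajectory.
rng = np.random.default_rng(2)
genuine = rng.standard_normal((100, 3)).cumsum(axis=0)
impostor = rng.standard_normal((100, 3)).cumsum(axis=0)
observed = project(genuine, azimuth=0.4) + rng.normal(0, 0.01, (100, 2))
print(match_score(observed, genuine) < match_score(observed, impostor))  # True
```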
{"title":"Using External Video to Attack Behavior-Based Security Mechanisms in Virtual Reality (VR)","authors":"Robert Miller, N. Banerjee, Sean Banerjee","doi":"10.1109/VRW55335.2022.00193","DOIUrl":"https://doi.org/10.1109/VRW55335.2022.00193","url":null,"abstract":"As virtual reality (VR) systems become prevalent in domains such as healthcare and education, sensitive data must be protected from attacks. Password-based techniques are circumvented once an attacker gains access to the user's credentials. Behavior-based approaches are susceptible to attacks from malicious users who mimic the actions of a genuine user or gain access to the 3D trajectories. We investigate a novel attack where a malicious user obtains a 2D video of genuine user interacting in VR. We demonstrate that an attacker can extract 2D motion trajectories from the video and match them to 3D enrollment trajectories to defeat behavior-based VR security.","PeriodicalId":326252,"journal":{"name":"2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122168558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2