
Latest Publications: 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)

AR Interfaces for Disocclusion—A Comparative Study
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00068
Shuqi Liao, Yuqi Zhou, V. Popescu
An important application of augmented reality (AR) is the design of interfaces that reveal parts of the real world to which the user does not have line of sight. The design space for such interfaces is vast, with many options for integrating the visualization of the occluded parts of the scene into the user's main view. This paper compares four AR interfaces for disocclusion: X-ray, Cutaway, Picture-in-picture, and Multiperspective. The interfaces are compared in a within-subjects study (N = 33) over four tasks: counting dynamic spheres, pointing to the direction of an occluded person, finding the closest object to a given object, and finding pairs of matching numbers. The results show that Cutaway leads to poor performance in tasks where the user needs to see both the occluder and the occludee; that Picture-in-picture and Multiperspective have a visualization comprehensiveness advantage over Cutaway and X-ray, but a disadvantage in terms of directional guidance; that X-ray has a task completion time disadvantage due to the visualization complexity; and that participants gave Cutaway and Picture-in-picture high, and Multiperspective and X-ray low usability scores.
Citations: 0
Extended Depth-of-Field Projector using Learned Diffractive Optics
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00060
Yuqi Li, Q. Fu, W. Heidrich
Projector Depth-of-Field (DOF) refers to the range over which projected images remain in focus. It is a crucial property of projectors in spatial augmented reality (SAR) applications, since a wide projector DOF increases the effective projection area on surfaces with large depth variance and thus reduces the number of projectors required. Existing state-of-the-art methods attempt to create all-in-focus displays by adopting either a deep deblurring network or light modulation. Unlike previous work that optimizes the deblurring model and the physical modulation separately, in this paper we propose an end-to-end joint optimization method that learns a diffractive optical element (DOE) placed in front of the projector lens together with a compensation network for deblurring. Given the desired image and the captured projection result, the compensation network directly outputs the compensated image for display. We evaluate the proposed method in physical simulation and on a real experimental prototype, showing that it extends the projector DOF with only a minor modification to the projector and is thus superior to normal projection with a shallow DOF. The compensation method is also compared with state-of-the-art methods and advances radiometric compensation in terms of computational efficiency and image quality.
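To make the joint optimization concrete, the following is a minimal sketch of the idea in PyTorch, assuming a toy depth-dependent Gaussian blur in place of the paper's differentiable wave-optics PSF model; the module names, network size, and training loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDOEBlur(nn.Module):
    """Stand-in for the differentiable wave-optics PSF model: a learnable
    DOE 'height map' reshapes a defocus kernel whose width grows with the
    surface's depth offset from the focal plane."""
    def __init__(self, ksize=11):
        super().__init__()
        self.height_map = nn.Parameter(torch.zeros(ksize, ksize))
        coords = torch.arange(ksize) - ksize // 2
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("r2", (xx ** 2 + yy ** 2).float())

    def psf(self, depth):
        sigma2 = 1.0 + 4.0 * depth ** 2          # defocus grows with |depth|
        kernel = torch.exp(-self.r2 / (2.0 * sigma2)) + self.height_map
        kernel = kernel.clamp(min=0.0)
        return (kernel / kernel.sum()).view(1, 1, *kernel.shape)

    def forward(self, img, depth):
        k = self.psf(depth).repeat(img.shape[1], 1, 1, 1)
        return F.conv2d(img, k, padding=k.shape[-1] // 2, groups=img.shape[1])

class CompensationNet(nn.Module):
    """Tiny stand-in for the compensation network: maps the desired image
    to the pre-compensated image that should be sent to the projector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

blur, comp = ToyDOEBlur(), CompensationNet()
opt = torch.optim.Adam(list(blur.parameters()) + list(comp.parameters()), lr=1e-3)

for step in range(200):                    # joint end-to-end optimization
    target = torch.rand(4, 3, 64, 64)      # desired on-surface appearance
    depth = torch.rand(1) * 2.0 - 1.0      # random depth offset from focus
    projected = comp(target)               # image actually projected
    observed = blur(projected, depth)      # simulated defocused result
    loss = F.mse_loss(observed, target)    # sharpness despite defocus
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point the sketch preserves is that the loss is backpropagated through both the compensation network and the optical element, so the learned DOE and the deblurring network co-adapt instead of being optimized separately.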
Citations: 0
A study of the influence of AR on the perception, comprehension and projection levels of situation awareness
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00069
Camille Truong-Allié, Martin Herbeth, Alexis Paljic
In this work, we examine how Augmented Reality (AR) impacts the user's situation awareness (SA) of elements secondary to an AR-assisted main task, i.e., elements not directly involved in the main task. These secondary elements can still provide relevant information that we do not want the user to miss. A good understanding of the user's awareness of them is therefore valuable, especially in the context of daily AR use, in which not all elements of the user's environment are controlled. In this regard, we measured SA of secondary elements in an industrial workshop where the AR-assisted main task is pedestrian navigation. We compared SA between three navigation guidance conditions: a paper map, a virtual path, and a virtual path with virtual cues about secondary elements. These secondary elements were either hazardous areas, for example zones where helmets are mandatory, or items that could be on the user's path, for example misplaced carts, boxes… We adapted an existing SA evaluation method to a real-world environment. With this method, participants were queried about their SA of different items on three levels: perception, comprehension, and projection. We found that the use of AR decreased the user's SA of secondary elements and that this degradation mainly occurs at the perception level: with AR, participants are less likely to detect secondary elements. Participants nevertheless felt the most secure with AR and virtual cues about secondary elements.
Citations: 0
Visualization and Graphics Technical Committee (VGTC) Statement
Pub Date: 2023-03-01 DOI: 10.1109/vr55154.2023.00007
Citations: 0
Simultaneous Scene-independent Camera Localization and Category-level Object Pose Estimation via Multi-level Feature Fusion
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00041
Wang Junyi, Yue Qi
In AR/MR applications, camera localization and object pose estimation both play crucial roles. Making learned techniques generalize, often referred to as scene-independent localization and category-level pose estimation, presents challenges for both tasks. The two tasks are closely related through spatial geometry constraints, but their differing requirements call for distinct feature extraction. In this paper, we focus on simultaneous scene-independent camera localization and category-level object pose estimation within a unified learning framework. The system consists of a localization branch called SLO-LocNet, a pose estimation branch called SLO-ObjNet, a feature fusion module for feature sharing between the two tasks, and two decoders for creating coordinate maps. In SLO-LocNet, localization features are produced to predict the relative pose between two adjacent frames, using color and depth images as inputs. Furthermore, we establish an image fusion module to promote feature sharing between the depth and color branches. With SLO-ObjNet, we take the detected depth image and its corresponding point cloud as inputs and produce object pose features for pose estimation. A geometry fusion module combines depth and point cloud information simultaneously. Between the two tasks, the image fusion module is also exploited to accomplish feature sharing. For the loss function, we present a mixed optimization objective composed of relative camera pose, geometry constraint, and absolute and relative object pose terms. To verify how well our algorithm performs, we conduct experiments on both localization and pose estimation datasets, covering 7 Scenes, ScanNet, REAL275, and YCB-Video. All experiments demonstrate superior performance to existing methods. We specifically train the network on ScanNet and test it on 7 Scenes to demonstrate its generality. Additionally, the positive effects of the fusion modules and the loss function are demonstrated.
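As a rough illustration of the mixed objective described above, here is a minimal sketch combining relative camera pose, absolute and relative object pose, and a geometric consistency term on the decoded coordinate maps; the loss weights, the quaternion parameterization, and all function names are assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def pose_term(t_pred, q_pred, t_gt, q_gt):
    """Translation + unit-quaternion rotation error for one pose."""
    q_pred = F.normalize(q_pred, dim=-1)
    q_gt = F.normalize(q_gt, dim=-1)
    return F.l1_loss(t_pred, t_gt) + F.l1_loss(q_pred, q_gt)

def mixed_loss(rel_cam_pred, rel_cam_gt,    # relative camera pose (t, q)
               obj_abs_pred, obj_abs_gt,    # absolute object pose (t, q)
               obj_rel_pred, obj_rel_gt,    # relative object pose (t, q)
               coords_pred, coords_gt,      # decoded vs. GT coordinate maps
               w=(1.0, 1.0, 1.0, 0.5)):     # illustrative weights
    l_cam = pose_term(*rel_cam_pred, *rel_cam_gt)
    l_abs = pose_term(*obj_abs_pred, *obj_abs_gt)
    l_rel = pose_term(*obj_rel_pred, *obj_rel_gt)
    l_geo = F.l1_loss(coords_pred, coords_gt)   # geometry constraint
    return w[0] * l_cam + w[1] * l_abs + w[2] * l_rel + w[3] * l_geo
```

The point of such a joint objective is that the geometry term ties the two branches together: both the localization and the object-pose decoders must agree with the same coordinate maps.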
Citations: 0
Design and Development of a Mixed Reality Acupuncture Training System
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00042
Qilei Sun, Jiayou Huang, Haodong Zhang, Paul Craig, Lingyun Yu, Eng Gee Lim
This paper looks at how mixed reality can improve and enhance Chinese acupuncture practice through the introduction of an acupuncture training simulator. A prototype system developed for our study allows practitioners to insert virtual needles with their bare hands into a full-scale 3D representation of the human body with labelled acupuncture points. This provides a safe and natural environment in which to develop acupuncture skills while simulating the actual physical process of acupuncture. It also helps practitioners develop muscle memory for needling and better retain the locations of acupuncture points through a more immersive learning experience. We describe some of the design decisions and technical challenges overcome in the development of our system. We also present the results of a comparative evaluation with potential users aimed at assessing the viability of such a mixed reality system as part of their training and development. The results of our evaluation reveal that the training system enhances spatial understanding and improves learning and dexterity in acupuncture practice. These results go some way toward demonstrating the potential of mixed reality for improving practice in therapeutic medicine.
Citations: 0
CoboDeck: A Large-Scale Haptic VR System Using a Collaborative Mobile Robot
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00045
Soroosh Mortezapoor, Khrystyna Vasylevska, Emanuel Vonach, H. Kaufmann
We present CoboDeck, our proof-of-concept immersive virtual reality haptic system with free-walking support. It provides prop-based encountered-type haptic feedback via a mobile robotic platform. Intended for use as a design tool for architects, it enables the user to directly and intuitively interact with virtual objects such as walls, doors, or furniture. A collaborative robotic arm mounted on an omnidirectional mobile platform can present a physical prop that matches the position and orientation of a virtual counterpart anywhere in large virtual and real environments. We describe the concept, hardware, and software architecture of our system. Furthermore, we present the first behavioral algorithm tailored to the unique challenges of safe human-robot haptic interaction in VR, explicitly targeting availability and safety while the user is unaware of the robot and can change trajectory at any time. We explain our high-level state machine, which controls the robot to follow the user closely and to rapidly retreat from them as the situation requires. We present our technical evaluation. The results suggest that our chasing approach saves time and decreases travel distance, and thus battery usage, compared to more traditional approaches in which a mobile platform returns to a fixed parking position between interactions. We also show that the robot can retreat from the user and prevent a possible collision within a mean time of 1.62 s. Finally, we confirm the validity of our approach in a practical validation and discuss the potential of the proposed system.
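A minimal sketch of how such a high-level follow/escape state machine might look; the states, thresholds, and transition rules below are assumptions for illustration, not CoboDeck's actual controller.

```python
from enum import Enum, auto

class State(Enum):
    FOLLOW = auto()    # shadow the user so a prop is always within reach
    PRESENT = auto()   # hold the prop at the virtual object's pose
    ESCAPE = auto()    # user closing in unexpectedly: retreat to safety

SAFE_DIST = 1.2    # m, assumed minimum user-robot clearance
REACH_DIST = 1.5   # m, assumed distance at which a prop can be presented

def step(state, dist_to_user, user_approaching, prop_requested):
    """One control tick of the high-level state machine; safety first."""
    if dist_to_user < SAFE_DIST and user_approaching:
        return State.ESCAPE                  # collision risk overrides all
    if state == State.ESCAPE:
        return State.ESCAPE if dist_to_user < SAFE_DIST else State.FOLLOW
    if prop_requested and dist_to_user <= REACH_DIST:
        return State.PRESENT                 # close enough to offer haptics
    return State.FOLLOW                      # default: stay available

# e.g., the user suddenly walks toward the robot while it presents a prop
print(step(State.PRESENT, 0.9, user_approaching=True, prop_requested=True))
# -> State.ESCAPE
```

Checking the escape condition before everything else mirrors the paper's priority: availability is the goal, but safety wins whenever the unaware user changes trajectory toward the robot.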
Citations: 1
How Do I Get There? Overcoming Reachability Limitations of Constrained Industrial Environments in Augmented Reality Applications
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00027
Daniel Bambusek, Zdenek Materna, Michal Kapinus, V. Beran, P. Smrz
The paper presents an approach for handheld augmented reality in constrained industrial environments, where it can be hard or even impossible to reach certain poses within a workspace. A user might therefore be unable to see or interact with some digital content in applications such as visual robot programming, robotic program visualization, or workspace annotation. To overcome this limitation, we propose a temporary switch to a non-immersive virtual reality that allows the user to see the virtual counterpart of the workspace from any angle and distance, with the viewpoint controlled through a unique combination of on-screen controls complemented by the physical motion of the handheld device. Using such a combination, the user can position the virtual camera roughly at the desired pose using the on-screen controls and then continue working just as in augmented reality. To explore how people would use it and what its benefits would be over pure augmented reality, we chose a representative object-alignment task and conducted a study. The results revealed that physical demand, often a limiting factor for handheld augmented reality, could be reduced, and that the usability and utility of the approach were rated highly. In addition, suggestions for improving the user interface were proposed and discussed.
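The viewpoint control can be pictured as composing two transforms: a coarse rig pose set by the on-screen controls and a fine offset from the tracked motion of the handheld device. The sketch below illustrates this under assumed math and names; it is not the paper's implementation.

```python
import numpy as np

def virtual_camera_pose(T_rig, T_device_delta):
    """T_rig: 4x4 pose set by the on-screen controls (coarse placement).
    T_device_delta: tracked device motion since entering the VR view
    (fine physical adjustment applied on top)."""
    return T_rig @ T_device_delta

def on_screen_update(T_rig, pan, elevate, yaw_deg):
    """Apply joystick input in the rig's own frame: translate, then yaw."""
    a = np.radians(yaw_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a), 0.0],
                  [ 0.0,       1.0, 0.0,       0.0],
                  [-np.sin(a), 0.0, np.cos(a), 0.0],
                  [ 0.0,       0.0, 0.0,       1.0]])
    T = np.eye(4)
    T[:3, 3] = [pan[0], elevate, pan[1]]   # right/up/forward offsets
    return T_rig @ T @ R

# e.g., move the viewpoint 2 m forward and yaw 90 degrees to look behind
# an occluding machine, then let device motion refine the final view
T_rig = on_screen_update(np.eye(4), pan=(0.0, 2.0), elevate=0.0, yaw_deg=90.0)
```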
Citations: 0
SCP-SLAM: Accelerating DynaSLAM With Static Confidence Propagation
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00066
Ming-Fei Yu, Lei Zhang, Wu-Fan Wang, Jiahui Wang
DynaSLAM is a state-of-the-art visual simultaneous localization and mapping (SLAM) system for dynamic environments. It adopts a convolutional neural network (CNN) for moving-object detection, but usually incurs a very high computational cost because it performs semantic segmentation with the CNN model on every frame. This paper proposes SCP-SLAM, which accelerates DynaSLAM by running the CNN only on keyframes and propagating static confidence through the other frames in parallel. The proposed static confidence characterizes moving-object features by the residual of the inter-frame geometric transformation, which can be computed quickly. Our method combines the effectiveness of a CNN with the efficiency of static confidence in a tightly coupled manner. Extensive experiments on the publicly available TUM and Bonn RGB-D dynamic benchmark datasets demonstrate the efficacy of the method. Compared with DynaSLAM, it achieves acceleration by a factor of ten on average while retaining comparable localization accuracy.
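A minimal sketch of the residual-based static confidence idea, under an assumed formulation: features that move consistently with the estimated inter-frame rigid transform receive high confidence, and confidence is carried between keyframes with a simple exponential moving average. None of this is SCP-SLAM's actual code.

```python
import numpy as np

def reprojection_residual(p_prev, p_curr, R, t, K):
    """p_prev, p_curr: Nx3 matched points in the previous/current camera
    frames; (R, t): estimated rigid motion prev -> curr; K: 3x3 intrinsics.
    Returns the per-feature pixel error w.r.t. the static-scene prediction."""
    p_pred = (R @ p_prev.T).T + t                 # where static points land
    uv_pred = (K @ p_pred.T).T
    uv_pred = uv_pred[:, :2] / uv_pred[:, 2:3]
    uv_curr = (K @ p_curr.T).T
    uv_curr = uv_curr[:, :2] / uv_curr[:, 2:3]
    return np.linalg.norm(uv_pred - uv_curr, axis=1)

def propagate_confidence(conf_prev, residual, sigma=2.0, decay=0.9):
    """Small residual -> likely static; blend with the previous frame's
    confidence so CNN-derived labels from the last keyframe persist."""
    conf_obs = np.exp(-(residual / sigma) ** 2)
    return decay * conf_prev + (1.0 - decay) * conf_obs
```

Because the residual only needs matched features and the current motion estimate, this per-frame update is cheap compared to running semantic segmentation, which is the source of the reported speedup.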
Citations: 0
Level-of-Detail AR: Dynamically Adjusting Augmented Reality Level of Detail Based on Visual Angle
Pub Date: 2023-03-01 DOI: 10.1109/VR55154.2023.00022
Abby Wysopal, Vivian Ross, Joyce E Passananti, K. Yu, Brandon Huynh, Tobias Höllerer
Dynamically adjusting the content of augmented reality (AR) applications to efficiently display the information that best fits the available screen real estate may be important for user performance and satisfaction. Currently, there is no common practice for dynamically adjusting the content of AR applications based on their apparent size in the user's view of the surrounding environment. We present a Level-of-Detail AR mechanism to improve the usability of AR applications at any relative size. Our mechanism dynamically renders textual and interactable content based on its legibility, interactability, and viewability, respectively. When tested, Level-of-Detail AR functioned as intended out of the box on 44 of the 45 standard user-interface Unity prefabs in Microsoft's Mixed Reality Toolkit. We additionally evaluated the impact on task performance, user distance, and subjective satisfaction through a mixed-design user study with 45 participants. Statistical analysis of our results revealed significant task-dependent differences in user performance between the modes. User satisfaction was consistently higher in the Level-of-Detail AR condition.
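One way to picture such a mechanism is to derive the level of detail from the visual angle a panel subtends at the user's eye; the thresholds and the three-level policy below are assumptions for illustration, not the paper's mechanism.

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Angle subtended at the eye by content of physical size size_m."""
    return math.degrees(2.0 * math.atan2(size_m / 2.0, distance_m))

def choose_lod(panel_width_m, distance_m,
               legible_deg=10.0, viewable_deg=3.0):   # assumed thresholds
    angle = visual_angle_deg(panel_width_m, distance_m)
    if angle >= legible_deg:
        return "full"      # text readable, widgets comfortably hittable
    if angle >= viewable_deg:
        return "summary"   # icons / abbreviated labels only
    return "marker"        # just indicate presence and direction

print(choose_lod(0.4, 1.0))   # 'full' at arm's length
print(choose_lod(0.4, 8.0))   # 'marker' across the room
```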
Citations: 1